Mixed States, Noise, and the Real World: Why Quantum Systems Don’t Stay Ideal
A practical guide to mixed states, decoherence, and noise—and what they mean for simulation, compilation, and benchmarks.
In quantum computing, the most important lesson is also the least glamorous: real quantum systems are never perfectly isolated, perfectly pure, or perfectly stable. The textbook picture of a clean qubit in a pristine superposition is useful for learning, but practical engineering begins when that ideal starts to decay. Once you move from equations to hardware, your quantum state becomes vulnerable to measurement back-action, environmental coupling, calibration drift, and control errors that turn a pure state into a mixed state. If you build, simulate, compile, or benchmark quantum workloads without understanding that shift, your results can look mathematically correct and still be physically misleading.
This guide explains mixed states, decoherence, and noise in practical terms, then connects them to simulation choices, compilation strategy, and benchmark interpretation. Along the way, we’ll use the language of the Bloch ball, state purity, and real device behavior to show why quantum systems drift away from ideal behavior. If you want a broader foundations refresher, our quantum engineering patterns guide and our practical article on building reliable hybrid workflows are useful complements. For teams thinking about tooling and access, pairing this article with our notes on modernizing legacy systems and edge-to-cloud architectures can help frame how quantum pipelines fit into existing software stacks.
1. Why the Ideal Qubit Is Only a Starting Point
1.1 The textbook qubit is a model, not a machine
In the simplest picture, a qubit lives in a two-dimensional Hilbert space and can occupy any coherent superposition of its basis states. That model is valuable because it lets you reason about gates, amplitudes, interference, and measurement probabilities cleanly. But the moment a qubit interacts with the outside world, its state is no longer fully described by a single wavefunction in practice. The environment does not need to “look” at the qubit in a human sense; even tiny uncontrolled couplings can leak information and destroy coherence.
This is why production quantum development requires the same discipline you’d apply to other fragile systems: observe assumptions, measure drift, and define failure modes. In software terms, the pure-state model is like a unit test, while the real hardware behaves more like a distributed system with timing jitter, partial failures, and non-deterministic degradation. For a practical mindset on that gap, our guide to modernizing a legacy app without a big-bang rewrite is a useful analogy: the old model still works as a reference, but you need a migration path that respects reality.
1.2 Superposition does not mean “everything at once” in the casual sense
People often describe superposition as if the qubit literally stores every outcome equally. That is misleading. A qubit is defined by amplitudes, and amplitudes can interfere constructively or destructively, which is what makes quantum algorithms useful. The visible state of the system depends not just on probabilities but on relative phase information that disappears when the system decoheres.
Once phase information is lost, the system may still have well-defined measurement probabilities, but you can no longer exploit interference as intended. This is the bridge between a pure quantum state and a mixed state: the latter captures uncertainty that is not just “we don’t know,” but “the system has been entangled with the environment or disturbed in a way that erased coherent phase relations.” That distinction is critical when you interpret simulator outputs versus hardware runs.
1.3 Why engineers should care early
If you wait until hardware execution to think about noise, you are already late. Noise influences how you prepare data, how you choose circuit depth, how you compile entangling gates, and how you set acceptance thresholds for tests. This is especially important for teams using cloud devices for proofs of concept, where a shallow circuit may perform well and a slightly deeper one can collapse in fidelity.
Think of the problem the way infrastructure teams think about reliability budgets. A system can be correct in principle and still fail under real conditions if the operational envelope is too narrow. Our CI/CD and validation pipeline article shows a similar pattern: the model is only useful if validation is tied to the environment where it runs. Quantum teams need the same mentality.
2. Pure States, Mixed States, and Density Matrices
2.1 Pure states: maximum knowledge, maximum coherence
A pure state is the most complete quantum description you can have for a closed system. If the system is in a pure state, the density matrix satisfies ρ² = ρ and the state purity Tr(ρ²) = 1. In that case, the system can be represented by a single ket vector, and all probabilities and phases are encoded coherently. Pure states are ideal for thought experiments and for many circuit simulations, especially when you want to study amplitude amplification, phase estimation, or interference patterns.
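The purity condition is easy to check numerically. The sketch below builds the density matrix for a |+⟩ state and verifies both Tr(ρ²) = 1 and ρ² = ρ; the state choice is just an illustrative example.

```python
import numpy as np

# |+> = (|0> + |1>)/sqrt(2), a pure single-qubit state
psi = np.array([1, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())      # density matrix rho = |psi><psi|

purity = np.trace(rho @ rho).real    # Tr(rho^2)
print(round(purity, 6))              # 1.0 for any pure state
print(np.allclose(rho @ rho, rho))   # True: rho^2 = rho holds for pure states
```

The same two lines of checking apply to any density matrix, which makes purity a cheap diagnostic to log alongside simulation results.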
However, even an apparently simple circuit on a simulator may be hiding assumptions about readout, gates, and initialization. If your workflow includes reset errors or stochastic noise models, you are already moving away from pure-state simulation toward a more general formalism. That is where density matrices become the right tool.
2.2 Mixed states: uncertainty plus lost coherence
A mixed state represents a statistical ensemble of quantum states or, more generally, a system whose reduced state is not pure because information has leaked into the environment. Formally, you describe it with a density matrix rather than a single vector. Mixed states are essential whenever you model thermal effects, control errors, depolarization, amplitude damping, or any process that makes the qubit behave like it is partially randomized.
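A minimal sketch of the ensemble picture: mixing |0⟩ and |1⟩ with classical probabilities (the 0.75/0.25 split is an arbitrary example) produces a density matrix whose purity is strictly below 1.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Ensemble: |0> with probability 0.75, |1> with probability 0.25
rho = 0.75 * np.outer(ket0, ket0.conj()) + 0.25 * np.outer(ket1, ket1.conj())

purity = np.trace(rho @ rho).real
print(round(purity, 4))  # 0.625, strictly less than 1 for a mixed state
```

No single ket reproduces this ρ: the off-diagonal entries are zero, so there is no coherent phase relation left to exploit.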
The key engineering implication is subtle but important: mixed does not mean “bad” or “broken.” It means your model is honest about incomplete coherence. In many real systems, the difference between 99% purity and 90% purity determines whether an algorithmic demo is compelling or misleading. If you are comparing workloads or teaching a team how to reason about purity, our hands-on guide to multi-factor authentication offers a useful systems analogy: a process can still work when assumptions become probabilistic, but the trust model changes.
2.3 The density matrix is the practical language of noisy quantum computing
The density matrix lets you track both classical uncertainty and quantum coherence in a single framework. For a single qubit, it can be visualized inside the Bloch ball, where pure states live on the surface and mixed states fill the interior. This picture is more than a geometric curiosity: it gives you intuition for how a qubit shrinks toward the center as noise increases.
Engineers should care because density-matrix simulation exposes effects that wavefunction-only simulators cannot represent. If you need to model decoherence, readout error, or mitigation workflows, the density matrix is often the correct abstraction. It is also the right way to understand why benchmark curves sometimes flatten or degrade unexpectedly once circuits become deeper or more entangling.
3. The Bloch Ball: A Visual Map of State Purity
3.1 Surface versus interior
The Bloch ball is one of the best teaching tools in quantum computing because it makes abstract state geometry tangible. A pure qubit state sits on the surface of the ball, while a mixed state lies inside it. The distance from the center roughly corresponds to how much purity the state retains. The closer you get to the center, the less coherent structure remains for algorithms to exploit.
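The shrinkage intuition has an exact form: for a single qubit, ρ = (I + r·σ)/2, and purity relates to the Bloch vector length by Tr(ρ²) = (1 + |r|²)/2. The sketch below checks that identity on a partially depolarized |0⟩ (the 80% figure is an arbitrary example).

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_vector(rho):
    """Bloch components r_i = Tr(rho * sigma_i)."""
    return np.real([np.trace(rho @ P) for P in (X, Y, Z)])

# Partially depolarized |0>: the state sits 80% of the way to the surface
rho = 0.8 * np.diag([1.0, 0.0]) + 0.2 * np.eye(2) / 2
r = bloch_vector(rho)

print(np.linalg.norm(r))                 # 0.8, strictly inside the ball
purity = np.trace(rho @ rho).real
print(purity, (1 + np.linalg.norm(r)**2) / 2)   # both 0.82: the identity holds
```

A pure state has |r| = 1 (surface, purity 1); the maximally mixed state has |r| = 0 (center, purity 1/2).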
This is a powerful way to explain noise to developers and IT teams. Rather than saying, “decoherence destroys phase,” you can say the state vector shrinks inward, losing the sharp directional information that gates depend on. Once that happens, measurement outcomes become more classical and less useful for interference-based speedups.
3.2 Why the ball matters more than the sphere
Many introductory resources emphasize the Bloch sphere, but the Bloch ball is the more realistic picture because it includes mixed states. The sphere boundary is idealized; the ball volume is what real devices explore. When you think in terms of the ball, you naturally ask: how much shrinkage has occurred, what kind of error caused it, and can error correction or mitigation recover the lost fidelity?
That question is especially relevant for benchmark design. If a benchmark only reports average gate fidelity or success probability without stating whether it assumes pure-state evolution, the result can be overinterpreted. A deeper benchmark should show how the state moves through the Bloch ball under representative noise channels.
3.3 Practical interpretation for developers
For developers, the Bloch ball is a mental shortcut for diagnosing whether an issue is coherent, stochastic, or measurement-related. Coherent errors often rotate the state in the wrong direction, while decoherence tends to contract the state toward the center. Readout errors, by contrast, distort the final observed distribution without necessarily shrinking the state during evolution. Distinguishing these effects helps you decide whether to tune compilation, improve calibration, or redesign the algorithm.
When you need a broader workflow comparison mindset, our piece on memory crisis and hardware pricing is a reminder that hardware constraints often shape the engineering choices more than the ideal design does. Quantum is no different: state geometry is only useful if it informs actual operational decisions.
4. Decoherence: How Quantum Information Leaks Away
4.1 What decoherence actually means
Decoherence is the process by which a quantum system loses phase relationships due to interaction with its environment. It does not require a dramatic event; it can arise from thermal noise, photon loss, charge fluctuations, timing instability, cross-talk, or imperfect control pulses. Once decoherence takes hold, the off-diagonal terms in the density matrix diminish, and the system behaves increasingly like a classical probability distribution.
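You can watch the off-diagonal suppression directly with a phase-flip channel, one simple dephasing model: apply Z with probability p, which scales the off-diagonal terms by (1 − 2p) per round. The p = 0.1 and five-round numbers below are arbitrary illustration values.

```python
import numpy as np

Z = np.diag([1.0, -1.0])

def phase_flip(rho, p):
    """Apply Z with probability p; off-diagonals shrink by (1 - 2p)."""
    return (1 - p) * rho + p * Z @ rho @ Z

# Start in |+>, whose coherence lives entirely in the off-diagonal terms
plus = np.array([1, 1]) / np.sqrt(2)
rho = np.outer(plus, plus)

for _ in range(5):               # five rounds of weak dephasing
    rho = phase_flip(rho, 0.1)

print(rho[0, 1].real)            # 0.5 * 0.8^5 ~ 0.164: coherence decays
print(rho[0, 0].real)            # populations untouched: still 0.5
```

Note what survives: the measurement probabilities in the computational basis are unchanged, but the interference-carrying terms are fading toward zero, which is exactly the "classical probability distribution" limit described above.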
This matters because quantum advantage often depends on those off-diagonal terms. If a circuit’s useful interference pattern disappears before measurement, the algorithm may still produce results, but not the intended ones. In that sense, decoherence is not just a hardware issue; it is an algorithmic threat.
4.2 T1, T2, and the practical limits of coherence
Most hardware discussions summarize decoherence with two timescales: T1, which describes energy relaxation, and T2, which describes loss of phase coherence. Those parameters are useful but incomplete because they summarize complex physical processes into coarse statistics. A short T2 can make even elegant circuits fail if their depth exceeds the coherence budget, while a decent T1 with unstable control can still degrade fidelity through coherent rotation errors.
As a developer, you should treat these times as constraints, not promises. They tell you how much time your circuit has before the state decoheres away from useful superposition structure. If you want a systems analogy, our edge-to-cloud architectures guide shows how latency budgets constrain design long before the implementation is complete.
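A back-of-envelope coherence budget makes the constraint concrete. Under a crude single-qubit model, the off-diagonal terms decay roughly as exp(−t/T2); the T2 = 100 µs and 0.5 µs gate-time figures below are hypothetical device numbers, not a real backend.

```python
import math

def remaining_coherence(circuit_time_us, t2_us):
    """Crude off-diagonal decay factor exp(-t/T2) for an idling qubit."""
    return math.exp(-circuit_time_us / t2_us)

# Hypothetical device: T2 = 100 us, average gate time 0.5 us
T2_US, GATE_US = 100.0, 0.5
for depth in (10, 100, 400):
    frac = remaining_coherence(depth * GATE_US, T2_US)
    print(f"depth {depth:>3}: ~{frac:.3f} of coherence remains")
```

At depth 10 the budget barely notices (about 95% remains); at depth 400 only about 14% survives, which is why a "slightly deeper" circuit can collapse in fidelity.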
4.3 Decoherence is why circuit depth is a budget
Every extra gate consumes coherence. That sounds obvious, but it has direct implications for synthesis and layout. A circuit with fewer logical gates can still be worse if the compiler maps it to a topology that introduces long idle times or additional swap chains. That is why “shallower” must be interpreted in hardware-aware terms, not just abstract gate count.
When teams benchmark algorithms, a common mistake is to compare logical depth only and ignore hardware execution time. On noisy devices, time spent waiting can be as costly as time spent computing. This is where compilation strategy becomes inseparable from physical modeling.
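One way to see why hardware-aware depth matters: under the simplifying assumption that each gate succeeds independently with probability (1 − p), circuit fidelity falls roughly as (1 − p)^n over n physical gates. The gate counts and error rate below are made-up illustration values.

```python
def rough_fidelity(n_gates, p_gate):
    """Crude estimate assuming independent, uniform gate errors."""
    return (1 - p_gate) ** n_gates

# A "shallower" logical circuit can still lose after routing:
# 40 gates on a good mapping vs 60 gates after swap insertion.
print(round(rough_fidelity(40, 0.005), 3))
print(round(rough_fidelity(60, 0.005), 3))
```

With a 0.5% per-gate error, 40 gates keep roughly 82% fidelity while 60 gates keep roughly 74%: the extra swaps, not the logical algorithm, ate the difference.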
5. Noise Models: Useful Abstractions, Not the Whole Truth
5.1 The most common noise channels
Noise models help translate physical imperfections into simulation-friendly math. Common channels include depolarizing noise, amplitude damping, phase damping, bit-flip noise, and readout error. Each model represents a different kind of disturbance, and each affects the quantum state in a different way. Depolarizing noise tends to randomize, damping channels describe energy loss or dephasing, and readout error corrupts the observed classical outcome after measurement.
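Amplitude damping is a good first channel to implement by hand because its Kraus operators are standard textbook forms. The sketch below builds them, verifies the completeness relation Σ K†K = I, and shows population leaking from |1⟩ toward |0⟩ (γ = 0.3 is an arbitrary decay probability).

```python
import numpy as np

def amplitude_damping_kraus(gamma):
    """Standard Kraus operators for amplitude damping with decay prob gamma."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
    return [K0, K1]

def apply_channel(rho, kraus):
    """rho -> sum_k K_k rho K_k^dagger"""
    return sum(K @ rho @ K.conj().T for K in kraus)

kraus = amplitude_damping_kraus(0.3)
assert np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(2))  # trace-preserving

rho_excited = np.diag([0.0, 1.0])          # qubit prepared in |1>
rho_out = apply_channel(rho_excited, kraus)
print(np.diag(rho_out).real)               # [0.3, 0.7]: energy leaks to |0>
```

The same `apply_channel` helper works for any channel once you supply its Kraus set, which makes it a convenient building block for composing toy noise models.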
These models are indispensable because they let you test algorithms before hardware access and build intuition about fragility. They are also dangerous if treated as exact representations of a specific machine. A real device’s noise is often time-dependent, qubit-dependent, and correlated across operations, which means a toy model can be directionally useful but operationally incomplete.
5.2 When the model lies by omission
A simulation using independent, identically distributed noise may look convincing while missing the actual error mechanism. For example, crosstalk between neighboring qubits can create correlated failures that a single-qubit noise model will never capture. Similarly, coherent over-rotations can accumulate in a way that looks benign at shallow depth but turns catastrophic after many repeated gates.
That is why teams need to compare simulation outputs with calibration data and hardware histograms, not just algorithmic expectations. Our article on validation pipelines for clinical decision support systems is relevant here: models are only trustworthy when they are continuously checked against observed behavior. Quantum systems demand the same discipline.
5.3 Choosing the right simulator for the job
Not every simulator is meant for every job. Statevector simulators are fast and ideal for pure-state evolution, while density-matrix simulators can capture mixed states and noise at higher computational cost. Stabilizer-based and tensor-network approaches can scale better for certain structured circuits, but they have their own constraints and assumptions.
The practical rule is simple: use the cheapest simulator that still preserves the phenomena you need to study. If you are analyzing interference only, a wavefunction simulator may be enough. If you need to assess decoherence or readout bias, you need a noise-aware simulation stack. For teams comparing tooling, our cloud modernization guide is a good reminder to optimize for capability and maintainability, not for theoretical elegance alone.
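The cost gap behind that rule is easy to quantify: a statevector stores 2^n complex amplitudes, while a full density matrix stores 4^n entries. Assuming 16-byte complex128 values, the sketch below shows how quickly the density-matrix approach outgrows memory.

```python
# Rough memory for complex128 entries (16 bytes each):
# a statevector holds 2^n amplitudes; a density matrix holds 4^n entries.
def mem_gib(entries):
    return entries * 16 / 2**30

for n in (10, 15, 20):
    sv = mem_gib(2**n)
    dm = mem_gib(4**n)
    print(f"n={n:>2}: statevector {sv:.6f} GiB, density matrix {dm:.3f} GiB")
```

At 20 qubits the statevector fits in 16 MiB while the density matrix needs 16 TiB, which is why noise-aware simulation of larger circuits leans on trajectory sampling or structure-exploiting methods instead.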
6. Simulation Strategy: Matching the Model to the Question
6.1 Wavefunction simulation is excellent until it isn’t
Wavefunction simulation is tractable for small systems and is the fastest way to debug algorithm logic. It lets you inspect amplitudes, validate gate ordering, and catch simple design bugs before spending hardware credits. But it assumes the state remains pure and coherent throughout execution, which means it cannot directly represent mixed states or measurement-induced collapse dynamics in a noisy environment.
That is fine if your question is “does this circuit compute the right unitary transformation?” It is not fine if your question is “what success probability should I expect on a superconducting device with readout bias and dephasing?” The simulation method must match the question.
6.2 Density-matrix and Monte Carlo methods for realism
When noise matters, density-matrix simulation becomes the most straightforward formalism for small to medium circuits. It tracks mixed states exactly, but the memory cost rises quickly. Monte Carlo trajectory methods and shot-based sampling can approximate noisy evolution more cheaply, especially when you care about aggregate outcomes rather than full state reconstruction.
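The trajectory idea in miniature: instead of evolving a density matrix, sample many independent shots in which the error either happens or does not. The bit-flip probability and shot count below are arbitrary illustration values.

```python
import random

def sample_bitflip_trajectories(p, shots, seed=0):
    """Shot-based sampling of a bit-flip channel acting on |0>.
    Each trajectory flips the qubit independently with probability p."""
    rng = random.Random(seed)
    counts = {"0": 0, "1": 0}
    for _ in range(shots):
        bit = "1" if rng.random() < p else "0"
        counts[bit] += 1
    return counts

counts = sample_bitflip_trajectories(p=0.1, shots=10_000)
print(counts["1"] / 10_000)   # close to 0.1, the exact channel value
```

The estimate carries shot noise, but memory stays at statevector scale per trajectory, which is the trade that makes Monte Carlo methods attractive for aggregate outcomes.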
For teams building demos, this means you should be explicit about what your simulator is proving. A demo that shows a perfect histogram on a pure-state backend may impress, but it does not predict real-device behavior. A better demo includes calibrated noise, compares several models, and reports state fidelity or expectation-value drift alongside raw counts.
6.3 A practical simulation checklist
Before shipping a simulation result internally, ask four questions: What state model am I using? What noise processes are included? What assumptions does the backend make about measurement? And what physical hardware behavior am I trying to approximate? Answering those questions prevents a lot of false confidence.
If your team is used to operational checklists, the approach resembles our inventory accuracy playbook: you do not trust the system just because it runs; you verify, reconcile, and segment the error sources. Quantum simulation needs the same rigor.
7. Compilation and Transpilation: Noise Starts Before Execution
7.1 Compilation can amplify or reduce error
Compilation is not a neutral translation layer. The way a circuit is decomposed, routed, and scheduled can significantly change its final error profile. A compiler that inserts too many swaps increases exposure to decoherence, while a compiler that understands hardware topology can reduce idle times and preserve coherence. That means the compiler is part of the quantum control stack, not just a syntax converter.
This is where practical quantum development becomes multidisciplinary. You need to know enough about hardware constraints to interpret compiler decisions and enough about algorithms to avoid overfitting to current device quirks. If you are used to software release engineering, our CI/CD validation material offers a familiar principle: every transformation step should preserve intended behavior, or at least quantify the deviation.
7.2 Topology-aware routing matters
Because qubits are often connected in limited geometries, logical neighbors may not be physical neighbors. The compiler compensates by inserting swap gates or by remapping the circuit. Each extra operation adds noise, and each scheduling delay widens the decoherence window. In a noisy setting, “optimal” compilation may mean the one that minimizes total physical error rather than logical gate count.
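The swap overhead for one two-qubit gate is a shortest-path question on the coupling graph: if the logical operands sit d hops apart, you need roughly d − 1 swaps to make them adjacent. A minimal sketch over a hypothetical 5-qubit line topology:

```python
from collections import deque

def swap_overhead(coupling, a, b):
    """Minimum swaps to make qubits a and b adjacent:
    shortest-path length minus one on the coupling graph (BFS)."""
    graph = {}
    for u, v in coupling:
        graph.setdefault(u, set()).add(v)
        graph.setdefault(v, set()).add(u)
    dist = {a: 0}
    queue = deque([a])
    while queue:
        u = queue.popleft()
        if u == b:
            return dist[u] - 1
        for w in graph.get(u, ()):
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return None  # qubits in disconnected components

# Hypothetical 5-qubit line: 0-1-2-3-4
line = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(swap_overhead(line, 0, 1))   # 0: already adjacent
print(swap_overhead(line, 0, 4))   # 3: each swap adds two-qubit gate error
```

Each of those swaps typically decomposes into three CNOTs, so a single badly placed logical gate can cost nine extra entangling operations, which is the routing penalty the section describes.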
That distinction is one reason benchmark comparisons can be deceptive. Two teams may run the same logical circuit and report different results because they compiled it differently. Without transparency about mapping, coupling graph, and backend calibration, benchmark claims are incomplete.
7.3 Measurement placement affects results
Measurement is not just a final step; it defines what part of the state survives into classical data. Since measurement collapses the quantum state, where and how you measure influences both the physics and the interpretation. If readout errors are large, the observed bitstring distribution may differ substantially from the underlying quantum probabilities.
That is also why post-processing and error mitigation matter. If you only inspect raw counts, you may mistake readout corruption for algorithmic failure. In practice, a rigorous workflow combines circuit design, compilation strategy, calibration knowledge, and measurement correction to isolate the root cause of deviations.
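A standard way to model readout corruption is a confusion matrix calibrated from state-preparation experiments; a simple mitigation then inverts it. The matrix entries below are hypothetical calibration numbers, not from any real backend.

```python
import numpy as np

# Hypothetical single-qubit readout calibration: P(read j | prepared i)
# rows = true state, columns = observed outcome
confusion = np.array([[0.97, 0.03],    # prepared |0>
                      [0.08, 0.92]])   # prepared |1>

true_probs = np.array([0.5, 0.5])      # ideal |+> measurement statistics
observed = true_probs @ confusion      # what the raw histogram actually shows
print(observed)                        # [0.525, 0.475]: biased readout

# Simple mitigation: apply the inverse of the calibrated confusion matrix
recovered = observed @ np.linalg.inv(confusion)
print(np.round(recovered, 6))          # back to [0.5, 0.5]
```

Matrix inversion works for small systems but scales poorly and can produce unphysical negative probabilities under sampling noise, which is why production-grade mitigation schemes use constrained or scalable variants of this idea.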
8. Quantum Error Correction and Error Mitigation: Recovering Utility from Noise
8.1 Error correction is not the same as ignoring noise
Quantum error correction uses redundancy, syndrome measurements, and carefully designed codes to protect information from specific errors without directly measuring the encoded logical state. That is the essential trick: you gain robustness by spreading information across many physical qubits. But error correction has overhead, and that overhead is real enough that today’s devices often cannot support large-scale fault tolerance yet.
For developers, this means error correction is both an architectural promise and a near-term constraint. You should understand the idea well enough to reason about logical versus physical qubits, but you should also recognize when a smaller-scale mitigation approach is more realistic. The same balanced thinking appears in our hybrid engineering patterns piece: architectural ambition is useful only when matched to implementation maturity.
8.2 Mitigation helps when correction is out of reach
Error mitigation includes techniques like zero-noise extrapolation, probabilistic error cancellation, readout calibration, and symmetry verification. These methods do not fully correct the quantum state, but they can improve estimates for observables and make small experiments more reliable. Mitigation is particularly useful for NISQ-era hardware where full fault tolerance is unavailable.
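Zero-noise extrapolation in miniature: measure an observable at deliberately amplified noise levels (for example via gate folding), fit a model, and extrapolate back to the zero-noise limit. The expectation values below are fabricated, exactly linear data chosen to make the fit transparent.

```python
import numpy as np

# Hypothetical expectation values at stretched noise scales 1x, 2x, 3x
scales = np.array([1.0, 2.0, 3.0])
measured = np.array([0.82, 0.66, 0.50])   # degrades linearly in this toy data

# Fit a line and extrapolate to the zero-noise limit (scale -> 0)
slope, intercept = np.polyfit(scales, measured, 1)
print(round(intercept, 3))                # 0.98: the mitigated estimate
```

Note that the qubits never got better; only the classical estimate did, which is exactly the caution in the next paragraph. Real data is noisier and rarely exactly linear, so the choice of extrapolation model (linear, exponential, Richardson) is itself an assumption you should report.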
Be careful not to confuse better-looking results with better physical states. Mitigation improves estimates, not necessarily the underlying qubit purity. That distinction is crucial when evaluating benchmark results or building presentations for stakeholders who may not know the difference between observed expectation values and actual state preservation.
8.3 A reality check for roadmap planning
Teams often ask whether to invest in mitigation now or wait for future error-corrected hardware. The answer depends on your use case. If your goal is exploratory algorithm research, mitigation is often enough. If your goal is long-term scalable execution, you must design with the logic of error correction in mind even if the code runs on noisy hardware today.
That kind of staged decision-making is common in other infrastructure transitions too. Our guide on integrating multi-factor authentication shows how you can improve resilience incrementally without pretending the older environment has vanished. Quantum teams should think the same way about robustness.
9. Benchmark Interpretation: What Results Really Mean
9.1 Success probability is not the whole story
When a benchmark reports a percentage score, it can hide many different failure modes. A circuit may fail because gates are noisy, because measurement is biased, because compilation introduced extra depth, or because the algorithm itself is sensitive to tiny perturbations. Without context, a single metric can be more misleading than helpful.
Good benchmark interpretation asks what was measured, under what state model, with what calibration, and relative to which baseline. Was the system initialized into a pure state or already partially mixed? Was the benchmark optimized for depth, fidelity, or runtime? These details matter because different physical effects dominate under different workloads.
9.2 Fidelity, purity, and practical observables
State purity is useful when you want to know how mixed a state is, but algorithm designers often care more about fidelity to an ideal target or the accuracy of a particular observable. A state can have modest purity yet still produce useful measurement statistics for a narrow task. Conversely, a high-purity state can be wrong in the specific basis you need if coherent errors rotate it away from the target.
So benchmark interpretation should separate state quality from task performance. If your work involves chemistry, optimization, or sampling, report both observable-level accuracy and state-level diagnostics when possible. That makes results much easier to compare across backends and compiler settings.
9.3 Build benchmark narratives like engineering reports
A good benchmark writeup should read like a postmortem, not a marketing slide. Include the noise model, the compilation route, the qubit connectivity constraints, the measurement correction method, and the main observed failure mode. Then explain what the benchmark implies for real workloads, not just what the idealized score says.
To make that report structure easier to adopt, the discipline is similar to our template for preserving trust during organizational change: clarity, context, and honest trade-offs matter. Quantum benchmark interpretation deserves the same level of candor.
10. Practical Workflow: From Ideal Circuit to Hardware-Ready Experiment
10.1 Start in the ideal model, then add realism in layers
The best way to avoid confusion is to move from simple to complex deliberately. First validate the logical circuit in a noiseless simulator. Next add idealized noise channels to see which errors dominate. Then incorporate backend-specific calibration data, routing constraints, and readout corrections before you submit hardware jobs. This layered workflow helps you pinpoint whether a failure comes from algorithm design or physical execution.
It also prevents “debugging by superstition,” where teams keep changing code without knowing which part matters. That kind of rigor is standard in mature software engineering. If you need a broader systems playbook mindset, our free market research guide shows how to benchmark with public data before making assumptions—a useful habit for quantum teams too.
10.2 Define acceptance criteria for noisy outputs
Because quantum outputs are probabilistic, you need explicit acceptance criteria. Decide whether you care about a threshold on success probability, an error bar on an observable, or a distributional distance from a target. Without that definition, even a good result can be dismissed as failure and a bad result can be overclaimed as success.
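A distributional acceptance criterion can be as simple as a total variation distance check against the ideal output. The target distribution, observed counts, and 0.15 threshold below are illustrative placeholders; a real threshold should come from your error budget.

```python
def total_variation_distance(p, q):
    """TVD between two probability distributions over bitstrings."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

ideal    = {"00": 0.5, "11": 0.5}      # e.g. an ideal Bell-state measurement
observed = {"00": 0.46, "11": 0.42, "01": 0.07, "10": 0.05}

tvd = total_variation_distance(ideal, observed)
print(round(tvd, 3))        # 0.12
THRESHOLD = 0.15            # explicit acceptance criterion, fixed up front
print(tvd <= THRESHOLD)     # True: the run passes
```

Fixing the metric and threshold before the run is the point: it turns "does this look right?" into a reproducible pass/fail decision.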
That is especially important in hybrid quantum-classical workflows, where the quantum component may only need to improve a substep, not solve the full problem alone. In that setting, a modest but statistically significant gain can be more valuable than a dramatic-looking but unstable spike.
10.3 Document assumptions for future you
Quantum experiments are easy to forget and hard to reproduce if you don’t capture assumptions. Save the backend version, calibration snapshot, transpiler settings, seed values, noise model, and mitigation method. When results change later, you will know whether the device changed, the compiler changed, or the environment drifted.
This is the same reason operational teams maintain change logs. Our maintenance checklist article is not about quantum at all, but it reinforces the right habit: reliability comes from disciplined upkeep, not from optimism.
11. Summary Table: Pure vs Mixed States vs Real-World Effects
| Concept | What it Means | Why It Matters | Simulation Impact | Hardware Implication |
|---|---|---|---|---|
| Pure state | Fully coherent quantum state with Tr(ρ²)=1 | Enables interference and ideal algorithm analysis | Wavefunction simulators are sufficient | Useful as a target, rarely exact on devices |
| Mixed state | Statistical or decohered quantum state with reduced purity | Captures realistic uncertainty and lost coherence | Requires density matrix or sampling methods | Common outcome of noise and environment coupling |
| Decoherence | Loss of phase relationships over time | Limits circuit depth and algorithm reliability | Needs time-dependent or stochastic noise models | Defines coherence budget on hardware |
| Noise | Any unwanted disturbance in gates, idles, or readout | Changes output distributions and benchmark scores | Must be modeled explicitly to predict real behavior | Can be coherent, stochastic, correlated, or all three |
| Measurement | Collapse from quantum state to classical outcome | Determines observed data and readout bias | Requires shot-based sampling and correction | Always disturbs the state and can introduce error |
12. FAQs: Mixed States, Noise, and Quantum Reality
What is the difference between a pure state and a mixed state?
A pure state is described by a single coherent quantum vector and has maximum possible purity. A mixed state is described by a density matrix and represents either statistical uncertainty or lost coherence due to interaction with the environment. In practice, mixed states are what you get when real hardware noise or partial information prevents you from treating the system as perfectly isolated.
Does decoherence mean the qubit “stops being quantum”?
No. Decoherence does not erase all quantum behavior instantly. It progressively suppresses the phase information needed for interference, making the state behave more classically over time. The system can still be quantum, but much less useful for algorithms that rely on coherent evolution.
Why do simulators often look better than hardware?
Many simulators use idealized assumptions such as noiseless gates, perfect initialization, and exact measurement. Even noise-aware simulators usually simplify real device behavior. Hardware includes calibration drift, crosstalk, timing errors, and correlated noise that a simplified model may not capture.
How should I think about the Bloch ball in practice?
Use it as an intuitive map of state quality. Points on the surface are pure states, while points inside the ball represent mixed states. As noise increases, the state moves inward, which signals a loss of usable coherence and a decline in algorithmic reliability.
What does state purity tell me that raw measurement counts do not?
State purity tells you how coherent and mixed the quantum state is before measurement, while raw counts only show the final classical outcomes. Two experiments can produce similar counts but have very different internal state quality. Purity helps diagnose whether the problem is in the evolution of the state or only in the final readout.
Should I use quantum error correction for every noisy circuit?
Not necessarily. Error correction is powerful but costly in qubits, operations, and implementation complexity. For near-term devices, error mitigation is often the more practical choice, while error correction becomes essential when you need scalable, long-duration computation with stronger fault tolerance guarantees.
Conclusion: Learn the Ideal, Engineer for the Real
Mixed states, decoherence, and noise are not edge cases in quantum computing; they are the operating conditions. The ideal qubit is the starting point for understanding gates and interference, but the mixed state is the honest description of what happens when the environment, measurement, and hardware imperfections become part of the story. If you are simulating workloads, compiling circuits, or reading benchmark results, your job is to know when the pure model is sufficient and when it is hiding the very behavior you need to understand.
The most practical quantum teams do three things well. First, they simulate with the right abstraction for the question. Second, they compile with hardware constraints in mind. Third, they interpret benchmarks as evidence about a full stack, not just a circuit diagram. If you want to go deeper into adjacent workflow topics, see our guide to hybrid engineering patterns, our notes on scalable edge-to-cloud systems, and our communication template for trust-building when complex technical reality needs to be explained clearly.
Quantum systems do not stay ideal, and that is exactly why the field is interesting. The real skill is not pretending away the noise, but learning how to design, simulate, and measure in its presence.
Ethan Mercer
Senior Quantum Content Strategist
Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.