A major challenge in neuroscience is reconciling idealized theoretical models with complex, heterogeneous experimental data. We address this challenge in the setting of continuous-attractor networks, which model how neural circuits use collective dynamics to represent continuous variables such as head direction or spatial location. Classical continuous-attractor models rely on a continuous symmetry of the recurrent weights to generate a manifold of stable states, and therefore predict tuning curves that are identical up to shifts. However, mouse head-direction cells exhibit substantial heterogeneity in their responses, seemingly incompatible with this classical picture. We demonstrate that mammalian circuits could nevertheless rely on the same dynamical mechanisms as classical continuous-attractor models. We construct recurrent neural networks directly from experimental head-direction tuning curves and show that they exhibit quasi-continuous-attractor dynamics; we then develop a statistical generative process that quantitatively captures the structure of the tuning heterogeneity. This generative description enables a large-N analysis: using dynamical mean-field theory, we show that these networks become equivalent to classical ring-attractor models with Mexican-hat interactions, whose continuous symmetry is spontaneously broken to produce bump states. In the seemingly disordered weights, the continuous symmetry essential to classical models is reflected in eigenvalue degeneracies, positioning spectral structure as a target for detecting continuous-attractor circuits in connectome data. We extend this framework to two-dimensional symmetries, constructing grid-cell models that similarly reduce to classical toroidal attractors. Our work demonstrates that the dynamical mechanisms of classical continuous-attractor models may operate not only in small brains and idealized systems but also in complex mammalian circuits.
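To make the classical picture concrete, the following minimal sketch simulates a ring-attractor network with Mexican-hat-like cosine recurrent weights; the parameter values and the rectified-tanh rate nonlinearity are illustrative choices made here for convenience, not taken from the fitted models described above. The rotation symmetry of the weight matrix shows up as a degenerate pair of leading eigenvalues, and the rate dynamics spontaneously break that symmetry, settling into a bump whose position depends only on the random initial condition.

```python
import numpy as np

# Illustrative ring-attractor sketch (assumed parameters, not fitted values).
N = 256
theta = 2 * np.pi * np.arange(N) / N              # preferred directions on a ring
J0, J1 = -0.5, 3.0                                # uniform inhibition + cosine excitation
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

# Rotation symmetry of the ring => the +1 and -1 Fourier modes share the
# eigenvalue J1/2; this degeneracy is the spectral signature of the symmetry.
eigs = np.sort(np.linalg.eigvalsh(W))[::-1]
print("leading eigenvalues:", np.round(eigs[:4], 3))  # J1/2 = 1.5 appears twice

# Rate dynamics tau dr/dt = -r + phi(W r + I); a weak random initial state
# seeds the spontaneous symmetry breaking into a bump.
phi = lambda x: np.tanh(np.maximum(x, 0.0))       # rectified-tanh nonlinearity (assumed)
I_ext = 0.5                                       # constant external drive (assumed)
rng = np.random.default_rng(0)
r = 0.01 * rng.random(N)
dt, tau = 0.1, 1.0
for _ in range(3000):
    r += (dt / tau) * (-r + phi(W @ r + I_ext))

print(f"bump peak at {np.degrees(theta[np.argmax(r)]):.1f} deg")
```

With these parameters the uniform state is unstable to the degenerate cosine modes, so any infinitesimal perturbation selects a bump position along the ring, illustrating the continuum of stable states.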
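The construction of a network from measured tuning curves can likewise be sketched generically. The example below embeds a heterogeneous family of synthetic von-Mises-like tuning curves as approximate fixed points of a rate network via a regularized least-squares fit; the synthetic curves, the tanh nonlinearity, and the pseudoinverse recipe are stand-in assumptions for illustration, not the exact procedure or data used in this work.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 200, 360                                   # neurons, sampled head directions
angles = 2 * np.pi * np.arange(M) / M

# Heterogeneous synthetic tuning curves (assumed stand-in for experimental data):
# random preferred directions, widths, and amplitudes.
pref = 2 * np.pi * rng.random(N)
kappa = rng.uniform(1.0, 4.0, N)                  # width varies across cells
amp = rng.uniform(0.5, 1.5, N)
R = amp[:, None] * np.exp(kappa[:, None] * (np.cos(angles[None, :] - pref[:, None]) - 1))

# Require r(theta) = tanh(W r(theta)) at every sampled angle: solve
# W R ≈ arctanh(R) in the least-squares sense. Truncating small singular
# values (rcond) keeps the weights well-conditioned.
Rs = 0.9 * R / R.max()                            # keep targets inside tanh range
target = np.arctanh(Rs)
W = target @ np.linalg.pinv(Rs, rcond=1e-3)

# Check: the population states along the ring are near-fixed points,
# i.e. a quasi-continuous family of steady states despite heterogeneity.
err = np.abs(np.tanh(W @ Rs) - Rs).max()
print(f"max fixed-point error across all angles: {err:.2e}")
```

The resulting weight matrix looks disordered entry by entry, yet it encodes a quasi-continuous ring of states, which is the regime the generative model and large-N analysis above make precise.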