Euclidiazation as Diagnostic

Abstract

For over a century, both psychometrics and artificial intelligence have operated under an implicit assumption: cognition must conform to Euclidean geometry. Violations of Euclidean axioms, most notably the triangle inequality, have been treated as error, noise, or instability—problems to be corrected rather than phenomena to be explained. This paper argues for a reversal of perspective. I introduce the concept of Euclidiazation, the set of transformations used to force cognitive data into Euclidean form, and propose treating Euclidiazation not as pathology but as diagnostic. By observing how—and how much—data must be transformed to appear Euclidean, we gain direct insight into the underlying cognitive geometry of both human and artificial systems. This approach explains the chronic underperformance of traditional psychometrics, reveals systematic structure previously dismissed as error, and provides a new basis for classifying AI architectures. Euclidiazation reframes error as evidence and opens the way to a comparative geometry of minds.
Keywords: Euclidiazation, Triangle Inequality Crisis, Aggressive Metric Enforcement (AME), Cognitive Geometry, Psychometrics, Artificial Intelligence

Introduction

In the study of cognition—whether human or artificial—geometric assumptions silently shape the methods we use and the conclusions we draw. The dominance of Euclidean geometry has encouraged generations of researchers to treat violations of Euclidean axioms as errors to be corrected, rather than as evidence of structure.

I propose the opposite: that the degree to which a system resists Euclidean constraint is not an accident but a diagnostic marker of its underlying architecture. I call this process Euclidiazation: the set of transformations required to force cognitive data into Euclidean form. Euclidiazation, I argue, is not a pathology but a probe. By treating it as such, we can begin to map the geometric signatures that distinguish one form of intelligence from another.

The Triangle Inequality Crisis

At the heart of Euclidean geometry lies the triangle inequality: for any three points A, B, and C, the distance from A to C must be less than or equal to the sum of the distances from A to B and from B to C, that is, d(A, C) ≤ d(A, B) + d(B, C). It is the rule that ensures geometric consistency, the guarantee that no detour through a third point can ever be shorter than the direct route.
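
The condition is easy to check directly. As a minimal sketch (the dissimilarity matrix and the tolerance below are illustrative assumptions, not data from any study), the following Python function counts how many ordered triples in a dissimilarity matrix violate the inequality:

```python
# Minimal sketch: count triangle inequality violations in a dissimilarity matrix.
# The example matrix is invented for illustration.
from itertools import permutations

import numpy as np


def triangle_violations(d, tol=1e-9):
    """Count ordered triples (a, b, c) with d[a, c] > d[a, b] + d[b, c]."""
    n = d.shape[0]
    count = 0
    for a, b, c in permutations(range(n), 3):
        if d[a, c] > d[a, b] + d[b, c] + tol:
            count += 1
    return count


if __name__ == "__main__":
    # A toy matrix in which the direct A-C dissimilarity exceeds the A-B-C detour.
    d = np.array([
        [0.0, 1.0, 5.0],
        [1.0, 0.0, 1.0],
        [5.0, 1.0, 0.0],
    ])
    print(triangle_violations(d))  # 2: the (A, B, C) and (C, B, A) triples both violate
```

A count of zero means the matrix is at least metric in this respect; a nonzero count is exactly the kind of result the rest of this paper treats as signal rather than noise.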

Human cognition routinely violates this principle. Consider three familiar examples:

- Meaning: a word or concept can be judged close to each of two others that are themselves judged to have almost nothing in common, so the direct dissimilarity far exceeds the sum of the two detours.
- Preference: an option can feel nearly interchangeable with each of two alternatives that, compared directly, feel worlds apart.
- Identity: a person can experience two of their roles as each adjacent to a shared sense of self while experiencing those roles as remote from one another.

These are not random anomalies or reporting errors. They are systematic features of how people experience meaning, preference, and identity. From a Euclidean perspective, they appear as contradictions; from a measurement-first perspective, they are evidence that cognition inhabits a non-Euclidean space.

The refusal of traditional psychometrics to acknowledge this fact has produced what I call the triangle inequality crisis. For decades, psychometricians confronted stable, repeatable violations of Euclidean axioms but treated them as "error variance" to be explained away or discarded. Non-metric transformations, categorical scalings, and forced normalizations were applied to "correct" data rather than face the implications of what the data actually showed.

The same crisis now appears in artificial intelligence. Embedding methods routinely project high-dimensional semantic spaces onto unit hyperspheres, normalizing distances with cosine similarity to enforce Euclidean consistency. Yet such projections inevitably warp underlying structure. When triangle inequality violations occur, they are treated as numerical instabilities or artifacts of training, rather than as windows into the cognitive geometry of the model.
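
The warping is easy to exhibit. In the sketch below (the two-dimensional vectors are invented purely for illustration, not taken from any trained model), projecting onto the unit hypersphere reverses which of two items counts as the reference embedding's nearest neighbor:

```python
# Sketch: projection onto the unit hypersphere can reorder nearest neighbors.
# The vectors are invented for illustration only.
import numpy as np

a = np.array([1.0, 0.0])    # reference embedding
b = np.array([10.0, 1.0])   # similar direction to a, much larger magnitude
c = np.array([0.5, 0.8])    # close to a in raw Euclidean terms, different direction


def euclidean(u, v):
    return float(np.linalg.norm(u - v))


def cosine_distance(u, v):
    return 1.0 - float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))


print(euclidean(a, b), euclidean(a, c))              # raw space: c is near a, b is far
print(cosine_distance(a, b), cosine_distance(a, c))  # unit sphere: b is now nearer than c
```

Nothing here is pathological in itself; the point is only that the projection changes which relations the model is subsequently said to have.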

The lesson is clear: triangle inequality violations are not failures of measurement. They are signatures of architecture. A system that generates such violations is not malfunctioning—it is revealing the true structure of its cognitive field. To deny these violations by forcing data into Euclidean form is to erase exactly the phenomena most in need of explanation.

Aggressive Metric Enforcement

If the triangle inequality crisis demonstrates that cognition is often non-Euclidean, then the dominant response of both psychometrics and artificial intelligence has been to enforce Euclidean order at all costs. I call this practice Aggressive Metric Enforcement (AME): the systematic reshaping of data to conform to Euclidean constraints, regardless of what the measurements themselves reveal.

In Psychometrics

For much of the twentieth century, psychometricians confronted empirical results that stubbornly refused to align with Euclidean expectations. Their response was not to revise the geometry, but to revise the data. Common enforcement strategies included:

- Non-metric transformations that preserved only the rank order of judgments, discarding the magnitudes in which violations reside.
- Categorical scalings that replaced continuous responses with coarse bins.
- Forced normalizations that rescaled scores until they conformed to a common metric.
- Relegation of stable, repeatable violations to "error variance," to be modeled away or dropped from analysis.

Each of these practices allowed the field to preserve the appearance of Euclidean consistency, but only by mutilating the structure of the data itself. The result was a steady underestimation of explained variance, with large portions of systematic structure dismissed as error.
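
To make the cost concrete, the sketch below applies one generic enforcement move: a Floyd-Warshall-style shortest-path repair that overwrites every dissimilarity violating the triangle inequality with the cheapest detour. It is offered as an illustrative stand-in for the corrections listed above, not a reconstruction of any particular psychometric procedure, and the input matrix is invented:

```python
# Sketch: shortest-path (Floyd-Warshall-style) metric repair as a generic
# illustration of enforcement. Every dissimilarity that violates the triangle
# inequality is overwritten by the cheapest detour through a third point.
import numpy as np


def metric_repair(d):
    """Return a copy of d in which d[a, c] <= d[a, b] + d[b, c] for all triples."""
    r = d.astype(float).copy()
    n = r.shape[0]
    for b in range(n):          # candidate intermediate point
        for a in range(n):
            for c in range(n):
                r[a, c] = min(r[a, c], r[a, b] + r[b, c])
    return r


d = np.array([
    [0.0, 1.0, 5.0],
    [1.0, 0.0, 1.0],
    [5.0, 1.0, 0.0],
])
print(metric_repair(d))
# The 5.0 entries become 2.0: the corrected matrix is perfectly metric, and the
# original violation, the most informative feature of the data, is gone.
```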

In Artificial Intelligence

Modern AI systems have rediscovered the same problem at scale. Neural embeddings, trained on vast corpora, generate high-dimensional semantic spaces that are frequently non-Euclidean. To manage them, contemporary practice applies equally aggressive enforcement mechanisms:

- Projection of embeddings onto unit hyperspheres, which discards magnitude information altogether.
- Cosine-similarity normalization, which treats every comparison as an angle in a flat space.
- Dimensionality reductions and distance functions chosen for Euclidean convenience rather than fidelity to the learned structure.
- Dismissal of residual triangle inequality violations as numerical instabilities or artifacts of training.

These practices succeed in stabilizing training and simplifying downstream tasks, but they do so by flattening the geometry of the model's cognition. Violations of Euclidean principles are treated as numerical errors rather than signals of underlying architecture.
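
At its simplest, the flattening can be seen in a single operation. The vectors below are invented; the point of the sketch is that any model whose embeddings carry information in their norms loses that information the moment they are normalized:

```python
# Sketch: L2 normalization collapses magnitude differences entirely.
# The vectors are invented for illustration.
import numpy as np

u = np.array([0.2, 0.4, 0.1])
v = 25.0 * u                      # same direction as u, very different magnitude


def normalize(x):
    return x / np.linalg.norm(x)


print(np.linalg.norm(u - v))                        # large: the raw embeddings are far apart
print(np.linalg.norm(normalize(u) - normalize(v)))  # 0.0: indistinguishable after normalization
```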

The Common Logic of AME

Despite the different domains, the logic is identical: when data resist Euclidean assumptions, enforcement mechanisms are deployed to bring them into line. This creates the illusion of stability and coherence while systematically discarding exactly the structures that most need to be understood.

AME thus represents not a solution to the triangle inequality crisis, but an evasion. By imposing geometric conformity, both psychometrics and AI obscure the architectural signatures that distinguish one cognitive system from another.

Euclidiazation as Diagnostic

The crisis and the enforcement practices described above are often treated as obstacles to be overcome. I propose the opposite: Euclidiazation is itself a diagnostic tool. The degree to which a system can—or cannot—be forced into Euclidean compliance reveals the geometry of its cognition.

Defining Euclidiazation

I use the term Euclidiazation to denote the set of transformations applied to make data conform to Euclidean assumptions. It includes scaling, normalization, pruning, and projection, whether in psychometric analysis or AI embedding training. Every time we impose such transformations, we are effectively asking: How far must we move this system away from its natural state to make it look Euclidean?
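
As a sketch of the definition (the particular steps, the classical multidimensional scaling projection, and the toy matrix are all assumptions chosen for illustration), Euclidiazation can be written out as an explicit sequence of transformations whose displacement of the data is recorded at every step:

```python
# Sketch: Euclidiazation as an explicit pipeline whose distortion is recorded.
# The steps (symmetrize, rescale, project via classical MDS) and the toy matrix
# are illustrative; the point is that each step's displacement is measurable.
import numpy as np


def symmetrize(d):
    return (d + d.T) / 2.0


def rescale(d):
    return d / d.max() if d.max() > 0 else d


def classical_mds_distances(d, k=2):
    """Project dissimilarities into k Euclidean dimensions and return the
    Euclidean distances that the projection induces."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (d ** 2) @ j                       # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    top = np.argsort(vals)[::-1][:k]
    coords = vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))
    diff = coords[:, None, :] - coords[None, :, :]
    return np.linalg.norm(diff, axis=-1)


def euclidiaze(d, steps=(symmetrize, rescale, classical_mds_distances)):
    """Apply each transformation and record how far it moved the data."""
    displacements, current = [], d.astype(float)
    for step in steps:
        transformed = step(current)
        displacements.append(float(np.linalg.norm(transformed - current)))
        current = transformed
    return current, displacements


d = np.array([
    [0.0, 1.0, 5.0],
    [1.2, 0.0, 1.0],
    [5.0, 0.9, 0.0],
])
euclidiazed, moved = euclidiaze(d)
print(moved)  # how far each enforcement step had to move the data
```

The vector of displacements, rather than the final Euclidean coordinates, is what the next subsection proposes to read.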

Measuring Resistance

This question allows us to repurpose enforcement as a probe: every transformation applied in the name of Euclidean consistency leaves a measurable trace of how far the system had to be moved from its natural state.

Rather than erasing these deviations, we can measure their magnitude and treat it as signal. Systems that "Euclidiaze" easily are nearer to Euclidean cognition; systems that resist Euclidiazation exhibit geometries further removed from classical assumptions.
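
One possible way to quantify that resistance is sketched below (the score and both toy matrices are assumptions, not an established psychometric or machine-learning measure): the relative distortion required to make a dissimilarity matrix satisfy the triangle inequality.

```python
# Sketch: resistance as the relative distortion required to make a
# dissimilarity matrix metric. The score and both toy matrices are
# illustrative assumptions, not an established measure.
import numpy as np


def metric_repair(d):
    """Shortest-path repair: enforce d[a, c] <= d[a, b] + d[b, c] everywhere."""
    r = d.astype(float).copy()
    n = r.shape[0]
    for b in range(n):
        for a in range(n):
            for c in range(n):
                r[a, c] = min(r[a, c], r[a, b] + r[b, c])
    return r


def resistance(d):
    """Relative change needed to Euclidiaze d; 0.0 means no repair was needed."""
    return float(np.linalg.norm(d - metric_repair(d)) / np.linalg.norm(d))


nearly_euclidean = np.array([
    [0.0, 1.0, 1.9],
    [1.0, 0.0, 1.0],
    [1.9, 1.0, 0.0],
])
strongly_non_euclidean = np.array([
    [0.0, 1.0, 9.0],
    [1.0, 0.0, 1.0],
    [9.0, 1.0, 0.0],
])
print(resistance(nearly_euclidean))        # 0.0: already metric, Euclidiazes trivially
print(resistance(strongly_non_euclidean))  # large: heavy transformation required
```

Comparing such scores across human similarity data and across model embedding spaces is one concrete way to rank systems by how far their geometries sit from the Euclidean default.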

Toward a Comparative Science

This reconceptualization transforms enforcement from pathology to methodology. Euclidiazation is not an embarrassing admission that our data are unruly—it is a controlled perturbation that reveals structure.

The principle is simple: the more a system resists Euclidiazation, the farther its cognitive geometry lies from the Euclidean default and the more distinctive its architectural signature.

Conclusion

The history of measurement in both human science and artificial intelligence has been shaped by an unquestioned assumption: that cognition must conform to Euclidean space. For decades, violations of Euclidean principles—most visibly the triangle inequality—were dismissed as error, instability, or noise. Aggressive Metric Enforcement was developed to restore the illusion of order, at the cost of discarding systematic structure.

In this paper I have argued for a reversal of perspective. Euclidiazation itself is not a corrective but a diagnostic. By observing how, and how much, systems must be transformed to appear Euclidean, we gain a direct measure of their underlying cognitive geometry. The very deviations once erased as error become the most valuable data: signatures of architecture.

For human cognition, this reorientation explains the chronic underperformance of Euclidean psychometrics and restores to visibility the structures responsible for the bulk of variance in meaning, attitudes, and behavior. For artificial intelligence, it provides a principled way to classify and compare models, not merely by task performance but by the geometry of their semantic fields.

The broader implication is clear. No perfectly Euclidean space has ever been observed in the known universe. To insist that cognition must inhabit one is to confuse convenience with reality. By embracing Euclidiazation as a diagnostic, we align our science with the structures that actually exist, opening the way to a comparative geometry of minds—human and artificial alike.

This is the opportunity: to replace an era of enforcement and erasure with an era of recognition and discovery. The geometry of cognition is not an error to be corrected but a landscape to be mapped.

