In literary sociology, texts have often been treated as a product of human history, and therefore as symptoms of the forces that have shaped them. This essay considers a single strain of computational literary studies that uses predictive modeling to locate differences between two corpora of texts, determining, for example, what makes a text literary, reviewable, or prize-worthy. This type of computational literary studies is sociological in nature but changes the epistemological status of the text from a symptom of the conditions of its production to a source of evidence supporting claims about human selection. In the ongoing »method wars«, some proponents of postcritique have allied themselves with computational literary studies due to its perceived machinic ability to read objectively and on the surface of texts; however, as this essay argues, the scale of computational methods and their ability to produce strong, generalizable theories complicate not only this alliance, but also the distinction between critique (characterized in this essay with reference to Jameson) and postcritique (characterized in this essay with reference to Sedgwick, Latour, Felski, Love, Best and Marcus), a distinction that rests on what role the text should play in our accounts of history. This essay responds to the seemingly confounding position that computational literary sociology occupies within this debate by arguing that the relationship between the text and history is accounted for in predictive modeling in the first instance through a statistical hermeneutic. This hermeneutic, as is shown through a historicization of computational literary sociology’s assumptions about humans and their relationship to the world, is inherently paranoid, imagining that human behavior emits historical signals that are retrievable from amidst the noise of the data. This essay pays particular attention to this construction – »signal in the noise« – in order to argue that computational literary sociology is fundamentally concerned with demystifying and finding meaning within the randomness of human behavior, which is why it cannot be so easily divorced from the tradition of critique. Through a brief survey of work in computational literary sociology (Dalen-Oskam, English, Moretti, So, Underwood), the essay argues that there are numerous ways of making meaning of the signal in the noise. One of these is to treat it as a symbol requiring interpretation, usually leading to an explanation that relies on the existence of something like the political unconscious. Another is to refuse critique in order to trace or assemble the actions that manufactured the appearance of the signal. In either case, the second instance of hermeneutical maneuvering that occurs in computational literary sociology is an answer to the question: how did the signal get into the noise? The survey also reveals that attending to the noise, rather than the signal, is a productive way to leverage a unique affordance of computational literary sociology – its visualization of the relationship between the corpus and the signal that appears to order it. Ultimately, this essay argues that despite computational literary sociology’s reconsideration of the role of the text, it continues, in the critical tradition, to treat the text as a material instantiation of history. Although computational literary sociology is not as postcritical as some might have thought, it still enables postcritical readings of a model’s outputs.
Furthermore, the essay finds that computational literary studies makes two things visible: first, that statistics has come to act – like ideology critique before it – as a nearly ubiquitous and therefore infrequently questioned tool for understanding human behavior; and, second, that what is valuable in thinking computationally is the ability to pay attention to the »noise«, without which statistical models would necessarily flounder.