Why Academic Hiring Sucks Fat Ass

There is a terrible amount of caution in the hiring of academics, I think; and I really mean terrible.  Having now spent the past several weeks applying for non-academic jobs, where I hear back quickly with either a request for an interview or a rejection–or, lacking a quick response, can safely assume a rejection–I see with some greater clarity just how stupid the academic application and hiring process is.

I get that you don’t want to give a tenure track job to someone who is a bad fit.  Okay.  So maybe, instead of having super shitty, contingent, one-year positions and then mildly shitty, bust-your-ass-doing-every-possible-extracurricular-service-you-can-think-of tenure track appointments, we find a nice middle ground: two or three year contracts with the possibility of acceleration into tenure-track based upon internal department reviews.  Relatively low-risk for the university and the department, good experience for young academics, with sufficient reward (financial and CV-building) to do something like move across the country.  Let’s worry about whether or not they will be good, long-term contributors to your program after they get a trial; you’ve got a better chance at medium-term weather forecasting than you do at long-term academic prediction, given that you have really almost nothing to go on in the latter.

Also, let’s cut back on the hyper-specificity of our application requirements, shall we?  Cover letters should be about the academic, not about how the academic will fit the mold you have created–since, oftentimes, that mold will only fit one or two people in reality, you can bet your sweet sorry asses that 99% of people applying for the job are lying through their teeth to make it sound like they are in fact a good fit.  In the same vein, can we forget about the asinine and specific AOS requirements, particularly in philosophy?  The problem with our educational system is not that we lack philosophers who specialize in environmental ethics and also post-colonial feminism; philosophy by its nature is general, and philosophers ought to be something of generalists.  They’re the ones good at educating students to think.

Pretty sure we can throw out requirements for student evaluations, as well.

I know the root of the problem right now is not the departments themselves, but the administrations; and I know the administrative bloat is killing almost every school’s budget for new hires–but the drawn-out, painful, agonizing process of hiring probably is not helping individual departments’ positions within universities.  Maybe I’m just bitter because I’m fairly certain that if a department just gave me a chance, I could and would prove myself worthy.

Anywho… back to grinding away on a book no one will ever read, that probably won’t get me a job anywhere.

The Social Construction Gap

If you are reading this, it’s fair to assume you are familiar with the term “social construct”.  You are, after all, alive post-2016 and on the internet (unless this has been compiled into Krampus’ Greatest Hits: A Book of Wonderful Sagacity and published to worldwide acclaim, which I’m sure will happen any day now).  Though society has evidently been constructing things for as long as society has existed, the term “social construct” did not come into vogue until the last half of the 20th century.  I won’t bother with the history of it; rather, I just want to point out that, as a formal concept, it is rather young.  It should therefore be little surprise that the meaning of “social construct” is pretty poorly understood–especially since the concept is intrinsically incoherent.

At the heart of this poor understanding is a radical divide: that is, a divide between nature and culture, or between biology and society, if you prefer.  The former pairing–nature (a broader term) and biology (a narrower one)–is recognized as important and, in a certain sense, “infallible”, but also considered stupid and clumsy.  That is, biology is what it is, and actively seeking to change it carries a taboo (and for good reason–but more on that some other time).  Meanwhile, society appears sophisticated, “rational”, but often cruel, oppressive, and evil.  Biology is often believed to be an unconscious influence on, or even a determinant of, the structuring of society–sometimes to the point that it is held responsible for the evil and cruelty of those social constructs.  At other times, society is believed to have “corrected for”–or at least to be capable of correcting–the “mistakes” or “clumsiness” of the biological.

Among the oddities surrounding this divide is that it appeared before anyone successfully made sense of it, and, point of fact, no one really has made sense of it yet.  Some intellectuals noticed that nature and biology form one kind of thing, and culture and society (or perhaps, even more ambiguously, “psychology”) form another kind of thing, and the two were pushed apart until we could figure out what to do with them.  We still haven’t figured much out, however, and I’m afraid we’re actually getting farther away from figuring it out, too: mostly because they should not have been pushed apart in the first place, and now people are carrying on as though this is the normal, proper state of things.

The idea of the “social construct”, as commonly understood today, has followed from the forced separation of nature and culture: that is, social constructs are understood to be concepts or institutions (i.e., patterns of practice and organization) which exist solely on the basis of human mental activity and are formed discursively through social interactions.  Often, these constructs are imbued with moral normativity.  In other words, the construct establishes a standard or acceptable set of behaviors; anything outside of them is considered bad.  This seems quite sensible, given the premises–after all, biology and nature present no moral codes, but we have such codes, and if they don’t come from nature, then they must come from society.

Practices such as marriage and its traditionally attendant attributes–monogamous, heterosexual, oriented toward the bearing and rearing of children–are often considered examples of social constructs, and ones having morally normative weight.  Within that cluster we can focus on monogamy, a normative concept currently under some scrutiny and facing opposition from certain quarters, which will serve throughout this essay as a running example.

Disclaimers

First, I’ll be quite honest: I am a man and I am wholeheartedly pro-monogamy.  It is a practice I consider a part of the highest ideal of a loving relationship.  In recent opposition to monogamy, I do feel threatened–as would anyone, I think, who finds his or her ideal being argued against; it is something in which I believe, and in which I would like others to believe.

Second, the impetus for this article is the work of Carrie Jenkins.  In particular, I have in mind her article “Modal Monogamy” (I know she has written a lengthier book, and I intend to address it more fully and thus more fairly at some point).  Carrie is a clear, persuasive writer.  I follow her on Twitter and, from what I’ve seen, I quite like her candor, her thoughts on mental health in academia, and her thoughts on academia in general.  But I think she’s wrong in her work, and I think it a fruitful exercise (if nothing else) for me to explain why.  For starters, she tries to bridge the gap between the biological and the social–but as we’ll see, that gap is an abyss, and no bridge is strong enough to span it.

The arbitrary hypothesis

The claim that a traditional or conventional position of moral normativity is nothing other than the product of social construction is typically attended by the implication that this normative construction is oppressive, and therefore bad, and, since it is moreover arbitrary, legitimate to supplant.  The first claim, that the norm oppresses, requires a view of human nature that centers around negative freedom as essential: namely, that radical autonomous independence from external constraints forms the core of what makes someone human, and any activity contrary to this–except in service of the preservation of another’s autonomy–constitutes oppression.  Good actions are actions that promote the ability of individuals to exercise their autonomy; oppressive actions are, by contrast, evil.  Because all but the most basic of norms entail some degree or another of constraint, sooner or later they all come to be seen as oppressive.  I will challenge the validity of the oppression claim in the next section.

In the meanwhile, I want to note an inconsistency between the first claim, that normative constructions are oppressive, and the second, that they are arbitrary.  That is, if a social construct is entirely arbitrary, it seems absurd to claim that it is oppressive.  If whatever is being oppressed has a claim not to be oppressed, then the social construct must be something bad; which is to say that in whatever realm the social construct is operating, it cannot truly be arbitrary because it is itself in conflict with something normative, unless, of course, the claim not to be oppressed is itself equally arbitrary.

[Image: a Hieronymus Bosch painting.  Shit gets real weird.]

In order for something to be arbitrary, or to be performed arbitrarily, it must have no inherent relation to anything else.  That something could exist in an arbitrary fashion is a very dubious proposition–for everything, insofar as it is, seems also to be in relation to other things.  For an action to be performed in an arbitrary fashion, the act must be severed from its ordinary context; for actions, too, insofar as they exist, exist in relation.  But human beings do have the ability to artificially, within extreme limitations, impose barriers between an act and its relations: as when a dictator arbitrarily decides to burn down the impoverished parts of his city to expand the beautiful and wealthy parts–this is not entirely arbitrary, but is arbitrary in the sense that it severs its relations to the proper ordination of ruling.

So let us take the supposed social construct with normative weight highlighted earlier, monogamy, and consider what it would mean to say that this is arbitrary: that, against the proper ordination of sexual relationships, some group or force–likely the ubiquitous bogeyman, patriarchy–has imposed on the social order this rule or law.  One could argue that this imposition stems ultimately from a biological male drive to protect its progeny, or from a kind of “selfish instinct”.  Conversely, one could argue–as some have–that females possess a biological drive to polyamory, to increase their chances of reproduction and to always choose the best mate possible.  [Point of fact: anisogamic reproduction has typically fostered, in mammalian species, male promiscuity and female selectivity.]

That the terms “biological drive” and “instinct” are virtual nonsense–to the point that even when clearly defined, which is rare, they possess no actual significance to real forces in nature–seems not to bother many who employ them.  Rather, they serve as “explanatory principles”, said in the pejorative sense, by which I mean unknowns presumed as unquestionable “first principles” invoked to bring an end to a discussion.

At any rate, the supposed arbitrariness of the monogamous norm is based on the premise that the male biological urge is, for no good reason, given formalized preference over the female.  If we turned this around the other way, though, it would be equally arbitrary.  If an arbitrary patriarchal imposition is oppressive to the biological impulse of the female, the converse “matriarchal” imposition would be oppressive to the biological impulse of the male.

Of course, those who are arguing against monogamy are not actually arguing for anything matriarchal–polyamorists are not specifically interested, as near as I can tell, in procreating with a multitude of partners (and it would not be great for the gene pool, in the long run, if they did).  The real interest is in sex with a multitude of partners, and sex which is severed from any reproductive associations–which desire seems not to be related to this supposed biological impulse, which, in evolutionary terms, serves the continuation of the species.

This is a common sleight of hand among social constructionists: claim that what is socially constructed is arbitrary and oppressive, then point to some biological basis as justification for an alternative social construction–one which turns out to be equally arbitrary and oppressive, and whose biological basis has no actual connection to the newly-proposed paradigm.

Foundations and morality

So what is a social construct?  Let’s start with what it isn’t: an arbitrary concept or institution which a group of old white men sat around a table and developed as a tool for oppressing women and non-whites.  Nor is it a habituated thought or idea which human brains have co-incidentally evolved to create for themselves to make sense of otherwise irrational experience–the “survival mechanism” thesis (in fact, I would say very few human concepts are of this sort; it would be extremely inefficient, evolutionarily speaking–nor is efficiency the sole criterion of “good” or “better”, in this case).

Okay: so why is it a social construct?  That is, what exactly is being “built” or “constructed”?  The immediate implication in the term is that the result is something artificial, which is something that would not come to be in the way that it is on its own; intervention is required, and specifically intervention which reorganizes things other than how they would normally be.  Implicit is a notion of violence–not physical damage, necessarily, but forcing things against their nature.  A skyscraper does not organically grow, but its materials must be forced, re-shaped, welded, altered again and again, and balanced against one another, in order to stand: this process is construction.

Alright: so why is it a social construct?  In order for something to be social, it requires the co-operative interaction of a plurality of individuals.  Even war requires some co-operation (slaughter is not a social activity), albeit a very hostile kind.  The plurality has to be engaged in something common.

Well then: what is the social construct?  An artificial concept or institution co-operatively forced into existence by a plurality of individuals.  We can only accept the existence of social constructs if we already accept that society exists as something separate from biology; that is, social constructs appear valid only if we allow as normal the divide between nature and culture.  Why should this be the case?  Why should the separation of the two be the default position?  As aforementioned, there are different results from the two–but, here’s the counter-hypothesis: society is natural.

That is, society may deviate from nature, it may oppose nature, or it may cohere with nature, enhance nature; but its root is nature itself.  Human beings are naturally social–it is an inexorable ordination of our biological, physical, corporeal being that we exist in relation to other humans.  That things produced socially would all be constructs and therefore artificial presupposes that there are no natural ways of conceptual or institutional social production.

For this reason, I prefer the term “social constitution”, as “constitution” is a broader term than “construction”–it can mean either natural or artificial.  Moreover, social constitution, as a process of developing concepts or institutions within society, has a continuity with the individual process of ideation whereby we develop our individual conceptual frameworks, the key difference being the making-public of these concepts by species-specifically human linguistic communication; but I think, there, I am wandering a bit outside of the pre-defined boundaries of this essay.

What follows from the separation of biology and society, and the relegation of all norms to social constructs, is that morality falls into the gap–or, as it really appears here, the abyss.  If moral norms are entirely the product of human intervention, then they really are arbitrary, whether they are based on biological facts or not–after all, the biological facts instill no preference in us, except, perhaps, to favor the ones that favor us.  If, on the other hand, moral norms at the very least can revolve around an understanding of the natural, then we can find a non-arbitrary basis for judging our practices right or wrong.  That is, normativity cannot reduce to the strictly biological, but it can develop in coherence with the natural.

Love and monogamy

In her book, What Love Is and What It Could Be, Jenkins defines “romantic love” as “ancient biological machinery embodying a modern social role” (p.82)–and here it is, specifically, that I think she tries to bridge the abysmal gap between the mistakenly-separated nature and culture–suggesting that we have an evolutionarily-instigated impulse to pursue sexual intimacy (as a reward system to promote procreation) which has been transmogrified through societal structures into something else.  It is this “something else” which is at issue, being a supposed social construct.

In her article, “Modal Monogamy”, Jenkins argues against the concept that “dyadic exclusivity” (i.e., two and only two people exclusively involved with one another) is essential to a romantic love relationship (“modal monogamy”–that the only possible love relationship is metaphysically constituted by such a dyadic exclusive romantic love).  Central to her thesis is the concept of “romantic love” as “(at least in part) a socially constructed kind”.

In short: Jenkins’ argument against normative monogamy depends upon her concept of romantic love, and her concept of romantic love depends upon the divide between society/culture and the biological/natural.  Consequently, given my position that there exists no such divide (except by an artificial separation of the kind the scholastics called a distinctio rationis, subsequently and mistakenly presumed to be a distinctio realis), I have to oppose her definition of “romantic love.”

To describe this phenomenon, with which we are all familiar as a feeling, as a social construct fulfilling a primordial biological impulse is, to be frank, silly.  It rests upon some notion of the “natural human”, a pre-societal, pre-cultural being of which we have nothing but the scarcest knowledge (since such a being is, by its very definition, pre-historical, and therefore one of which we have only forensic and no semantic indications), as well as upon a notion of culture and society as things which are artificial and wholly separate from whatever is natural.

Rather than try to deconstruct the feeling of romantic love into separate and mechanistic biological and societal causes, let’s examine the phenomenon itself: that is, what is the feeling itself?  Succinctly described, all “love” has as its core a desire for union with the beloved.  We are pained when that desire is frustrated and pleased when it is satisfied (unless, of course, the expectation was exaggerated beyond the reality, in which case we are disappointed).  This is as true of the love we have for a piece of pecan pie (attended by a scoop of vanilla ice cream, of course) as it is of the love for a pet, or for a human being.

Obviously, however, the kind of union differs for each object.  I do not want to eat my pet, and I do not want to have sex with pecan pie (and anyone who does is not really seeking sex, but masturbation, which is a different thing altogether).  Identifying the kind of union that we want in romantic love will go a long way to helping us understand what romantic love really is–and why and how monogamy is a legitimate moral norm.

[Incidentally, there is a weird inversion which occurs in Jenkins’ “Modal Monogamy” article, in which she points out–correctly–that many people base their modal monogamous belief on (or conflate it with) a moral monogamous belief.  It seems quite clear to me that whatever one’s moral position on monogamy, it ought to stem from its ontological status, not vice versa.  But I’ll spare you all the digression into an Aristotelian/Thomistic investigation of act and potency… sorta…]

Two kinds of union are patently desired in romantic love: sexual and emotional.  To some extent, the emotional often seems more important: sexual union tends to result in relatively short-lived pleasure, whereas the emotional pleasure of an intimate relationship tends to last much longer, and to be more pervasive in one’s life.  But there is also a third kind of union, which we tend to do a poor job of recognizing, which is union in thought.  It is uncomfortable when we discover that our romantic partners–intended or actual–have beliefs which oppose our own, or want things in life which we do not.  While we do not want to be of literally one mind, we do want to be headed in the same direction.  Our beliefs and our desires orient the trajectory of our lives (to at least some extent), and having different trajectories means that, sooner or later, the union will dissolve.  A failure to attain union in thought results quite often in a separation at both the emotional and the physical levels.

None of this, of course, proves that monogamy is a metaphysical necessity to the feelings of romantic love.  It is unquestionable that someone can simultaneously desire unity with a plurality of individuals at all three levels.  What is questionable, however, is whether someone can successfully accomplish such unity with a plurality of individuals.

I think the answer is “no”.  You can certainly have strong emotional attachments, sexual attraction, and even conceptual unity, with a plurality of individuals.  But the conceptual unity has to be broad, and therefore is generic; the emotional and the sexual unity, if with a plurality of individuals, is inescapably fragmentary.  That is, if I am sexually and emotionally involved with more than one person, none of those people are receiving the fullness of my sexual or emotional self.  The unity is only partial, and being incomplete is metaphysically inferior to being complete.

In other words, to take a phrase from Karol Wojtyla, I think that romantic love requires a “total gift of self.”  If someone I love is having or acting on desires for someone other than myself, then a part of her is not being given, and the union is incomplete; and vice versa.

So while it is absolutely true that there is no metaphysical necessity behind monogamy, and that the normativity of monogamy is a socially-constituted institution, it deserves to be a norm because it directs us towards a better fulfillment of our natural desire for complete union.  And before anyone objects, “Ah, but my desires cannot be fulfilled by sexual/emotional/intellectual union with just any one person”, that is absolutely true–but they also cannot be fulfilled by a thousand sexual, emotional, or intellectual partners.  Desire is by its nature indeterminate, general, and open to always more and other.  You can always want something more or something other than what you have–especially if your desires are not subordinated to the belief that you can have a more perfect union through monogamy.

Shitting Bull

I said above that social constructionists commonly claim that what is socially constructed is arbitrary and oppressive, and subsequently point to some biological basis as justification for an alternative social construction–a basis that has no actual connection to the newly-proposed paradigm.  A problem with this is that the social constructionist thereby provides a false but seemingly legitimating claim, which then becomes adopted as part of the conceptual framework for individuals and social groups.  Because it is presumed as normal that culture is one thing and biology another, and that there are no real or essential connections between the two–no connections by which we could establish non-arbitrary norms–we start to believe some real bullshit.

And if we’re going to stop believing in the bullshit, that means we need to understand that culture comes from nature, and can either cohere with it or contradict it–and if we want our culture to cohere with our nature (presuming that coherence is superior to incoherence), we need to understand what human nature really is.  Which ultimately entails understanding how we understand–not an easy task, and something really difficult to communicate.  But at any rate, until this monumental undertaking can be accomplished, we can probably help ourselves in the meanwhile by closing the mental gap between the biological and the social, rather than trying, with great but foolhardy earnestness, to bridge it; for all our social constructions ultimately crumble into the abyss.

Divorcing Academia: Freedom or Despair?

I have spent two years on the academic job market and had only a very few first-round interviews, and one campus visit.  I did not bother counting how many rejections; let us just call it a large number, nearing if not past 100.

Additionally, I have applied for at least 90 non-academic jobs in the same time-frame, probably more.  There, the closest I came was the final three for a job I had to convince myself I wanted, and, in the aftermath, am (still) very glad I did not take.  Admittedly, I have turned down many interviews, because I applied for the job without first investigating the company, and, investigating after receiving the interview request, said, “No thanks, charlatans/hucksters.”

But my heart has never been in applying for any non-academic job.  I am not being an arrogant son-of-a-bitch–at least, not too much of one–when I say that I am good at the meat of academia: I have three published articles, two soon-to-be-published books, two book reviews, and three articles currently under review; another article almost ready to submit, and a third book for which I have about 50,000 words written, all of this in less than a year from defending my dissertation.  Granted, my articles are not published in “high-impact” journals.  Being a mixed medievalist-phenomenologist-semiotician means I am on the edge of everyone’s acceptable content-range.  Nevertheless, I think it’s hard to say that’s a bad publishing record; my books, at least, are both with highly reputable academic publishers.

I am also, from most of the feedback I’ve gotten (students and peers alike), a good teacher.

And yet, still, no one wants to hire me.

It could be that my doctoral program isn’t extremely well-known; but it is known somewhat, and respected–enough that I should receive attention from at least someone, somewhere.

It could just be that the market is particularly bad for my AOS/AOC this year–though the tendency away from metaphysical and towards “practical” topics of philosophy seems unlikely to ebb any time soon–and that, in a few months as the 2018-19 cycle starts, the market will be flush with jobs fit for me.

But I cannot wait for that.  It seems I may have to break up with academia… and I mean really and truly.  Not just “taking a break” and “exploring my options”.  I think I need to acknowledge that she is a faithless bitch and cut my ties.  Will this be freeing?  Will it cause me to despair?  Can I ever really be happy outside academia?  My thoughts, my research, my writing, my teaching; in some sense these seem like children, to me.  If I divorce the academy, will I still be able to be a father to them?  Or is it that I can only see them on the weekends, perhaps the occasional weeknight?

I do not think I will really feel free, to be honest, because the commitment is not externally imposed, but the consequence of an internal desire.  I would have to change myself; and I do not see that happening.

I interviewed for a last-minute, poorly-paying, one-year instructor position earlier this week.  It is a marginal step up from being an adjunct.  I’ll know their decision in a few days.  I have a couple other, long-shot applications out on the market, for which I have no expectations.

In Defense of Postmodernism

Words, as they are used, often do not make sense.  This does not, however, make those words’ uses nonsense.  If I say, for instance, that “we are in deep water, so we had better get a shovel,” this does not make sense, but it is not nonsense.  The mixing of metaphors confuses a meaning, but it does not deprive the sentence of meaning altogether.

In light of this very simple premise, I am going to make two points about postmodernism:

  1. Most of what is called postmodernism, including a great deal of gender studies, does not really make sense, but neither is it nonsense.
  2. Most uses of the word “postmodern” are themselves nonsensical.

Our first story begins with a wide chasm: on one side, there stand the “ordinary” people, including the empirically-minded and scientifically-grounded.  On the other side, we find so-called postmodern academics, residing predominantly in the humanities, especially literature, and the social sciences.  Each of these academics has studied increasingly complex and semantically-sophisticated texts for at least a decade, and every new conceptually-laden term, of which there is an ever-growing dictionary, widens the chasm.  This canyon of divide was started quite a few hundred years ago, by men of the Enlightenment such as Thomas Hobbes, David Hume, and Jean-Jacques Rousseau–even though these men would likely find the present-day beliefs of the so-called postmoderns to be unintelligible–for they held as a premise that nature (or fact) and culture (or value) were essentially different and unrelated spheres of human activity, knowledge, and existence.

But empiricist Enlightenment writing, even at its most abstruse, seldom strayed far from the vernacular.  It held to a belief in human nature, and in the existence and importance of nature generally, even if it thought this nature was not essentially connected with culture.  These thinkers’ concern in this separation was a practical one: which of the two separate spheres would control the other.  Contrariwise, the postmodernists, though still concerned with control, have wandered quite far from common language and nature both.  The linguistic obscurity can quite probably be laid at the feet of Martin Heidegger, who should be considered one of the fathers of a genuine postmodernism.  The denial of nature is a more complex thread, but one which develops out of Darwinian evolutionary theories, positivism, and the radical events of the twentieth century, through which culture rapidly complexified, distancing itself even farther from the natural in ever more-oppositional ways.

I won’t bother with nature, here–too complex.  But language…

Heidegger was a controversial figure from the first day he came into the public eye (or perhaps even before).  His first major published work, Being and Time (1927), quickly became famous and quickly confused a lot of people.  It speaks, as you might expect, of being and of time, but with a radically new presentation which sparked no small amount of debate, including debate over whether his work was nonsense.  Such accusations were not rare.  After all, the man wrote sentences like this: “World-time is ‘more Objective’ than any possible Object because, with the disclosedness of the world, it already becomes ‘Objectified’ in an ecstatico-horizonal manner as the condition for the possibility of entities within-the-world” (Sein und Zeit 419/471).  To most people, I imagine, that sounds like gibberish.  To someone who has made an extensive study of Heidegger’s other writing–especially his courses and lectures from 1925-1930–it makes more sense.  But even with such study, that sentence poses interpretative challenges.

Nevertheless, Heidegger captured the attention of countless thinkers in the twentieth century.  His thought was new and exciting.  He had an aura of mystery that was not squelched even by the postwar suppression of his ability to teach, on account of his affiliation with the Nazi party.  Thinkers still flocked to his work and often strove to meet him when they had the opportunity; even Frenchmen who had suffered at the hands of the Germans, such as Jean-Paul Sartre, who considered himself Heidegger’s ally in thought.  But many others, even those who rejected his ideas, adopted his manner: especially the school of Critical Theory, founded by Max Horkheimer–a contemporary of Heidegger’s who attended some of his lectures in the 1920s.  Though Horkheimer fundamentally disagreed with Heidegger, he and his school–comprising such members as Theodor Adorno, Jürgen Habermas, and Herbert Marcuse–nevertheless took lessons from Heidegger’s treatment of language.  For much of Heidegger’s philosophy revolved around his re-interpretation of common words, his (oftentimes disputable) interpretation of their etymologies, and the ease with which the German language allows one to construct new, multi-dimensional compound words.  His was a search for words that would make sense of difficult and abstract concepts, concepts that did indeed deal with multiple dimensions.

But those who followed his style, if not his thought (and many who followed aberrant interpretations of his thought), did so often from the perspective that culture is entirely a social construct and has no relation to what is natural; from the presupposition that human persons are primarily what exists in the cultural realm, and that what they are as biological ought to be forced to adapt to the cultural.  Add in a heavy dash of Marxism–keeping in mind that the school of Critical Theory grew up amid the rise of Nazi Germany, against which one of the cultural counter-forces was Marxist Socialism–as well as the linguistic semiology of Ferdinand de Saussure, and the postmodernist “Theorists”, capital T, are born; or, rather, socially-constructed out of intellectual artifice.

The resulting jargon-laden books and articles are immensely difficult to understand, and when understood, often lack clarity or precision.  They frequently do not make sense because the objects of their reference have no genuine grounds, like a continual mixing of highly abstract metaphors.  But they are not nonsense.  The word “performative” sounds silly, because it is, particularly when applied outside the context of artistic performance.  But it is not nonsense.  Nor is talking about “performative masculinity”; all this means is acting in a way which corresponds to a concept of what it means to be masculine.  That actually makes sense.  What does not is the presupposition that masculinity is a purely social, artificial construct, and that therefore masculinity is itself constituted through such performance.  But even that is not nonsense.  It is the result of a theory which, while wrong, is not unintelligible.  Presupposing the truth of its foundations, it is quite coherent, and would make sense to someone well-studied in its concepts and terminology.

Consequently, when Peter Boghossian and James Lindsay denominate their attempt at a Sokal-style hoax, “The Conceptual Penis as a Social Construct”, as nonsense (a word they use 17 times in their post-hoc report), it is counterproductive hyperbole.  The Sokal hoax and its follow-up in Fashionable Nonsense likewise failed to aim at the correct target: for while Sokal more poignantly needled the terminological ambiguities of Theory, and particularly exhibited its readiness to accept whatever means promoted its agenda, especially support from science, the problem with Theory is not the conclusions that it reaches (and there are plenty of serious Theorists who do not promote the absurdities you will see from New Peer Review), but the foundations on which it rests.

And so this brings us to our second story: for the foundations on which so-called postmodern Theory rests are not really postmodern at all, in any meaningful sense.  To be post-modern would mean to be after what is modern; and from an intellectual, philosophical, academic point of view, modernity is defined by its first and crucial presupposition: namely, that what we know are our own ideas.  In other words, the direct object of my knowledge is the idea that I alone possess.  Knowledge is therefore essentially private, subjective, and only incidentally and occasionally capable of being shared with others.  This premise is equally true of Cartesian rationalists and Lockean empiricists, Humean sceptics and believers in Berkeley, and it is true of the vast majority of so-called postmodernism, as well.  Only mathematical representations or organizations of empirical observations consistently avoid the subjectivization of experiential knowledge which characterizes modern and pseudo-postmodern philosophy alike.

Thus I said that Heidegger could be considered a father of a genuine postmodernism because he patently denies this idealism (even though many, in following him, unconsciously seem to adopt it).  His philosophy of Being-in-the-World, of Dasein, denies the initial presupposition of subjective-objective opposition upon which modernity was founded.  This includes the denial that the human being’s context-driven development of personhood is independent of (or should be independent of) the biological and the natural, which idea he criticizes implicitly in his lecture on Aristotle’s Physics, his lecture on the “Age of Ideology”, and at length in his Letter on Humanism.

[Image: C.S. Peirce (1839-1914)]

Ironically, the more important father of postmodernity died before Heidegger had written a single word of those texts: namely, Charles Sanders Peirce.  The full story of how his semiotics transcends the subject-object divide with more clarity and success than Heidegger achieved is a lengthy one.  In short, his theory of signs shows the possibility of an essential continuity of the universe, from the most fundamental particles to the most abstract concepts.  As a result, he repudiates idealism: specifically, its nominalistic denial that the mind is capable of knowing extramental relations.

In contrast, what is most often today called postmodernism–authors such as Barthes, Deleuze, Lacan, Derrida, Foucault, Rorty, and so on–is in fact nothing other than ultramodernism.  It takes the Enlightenment division of nature and culture, and in consequence of the Darwinian destruction of the concept of fixed natures, runs unimpeded into nominalistic idealism.

Yes: it is absurd.  It is far removed from common experience.  But so is advanced mathematics, and quantum physics, and neuroscience.  The truth is, there are cracks in the foundations of the physical sciences as well.  The kind of scientism advocated for by the “New Sceptics” stands on the ground, as opposed to the cloud of abstraction in which we find Theory; were this scientism to look down, though, instead of screaming at the clouds, it would see that its ground is a melting ice floe.

Motes, planks, and eyeballs: you get the idea.

Tone Deaf and the Impotent Sadness

The End of Semester, End of Year Post that Indicates the Continued and Likely-to-Continue Awfulness of Everything

or

I Spilled My Thoughts on This Page and This Horror is the Result

By now, an awful lot has been written about the election and why it is that Trump won.  Much of it is horribly reactionary (like Trump himself), much of it inane, and much of it written in the very spirit that persuaded enough of the country (most of it, by landmass) to vote against the various ideals which Hillary represented.  Other pieces have been quite good and insightful, critiquing that very hyperbolic tendency which has been the chief failing of the regressive left.

None of this writing, of course, changes the fact that the United States of America elected a man who is a notorious liar, a braggart, someone who readily flip-flops on any position to his advantage, and who, in the course of this unprincipled behavior, may have shackled himself irrevocably to white ethnonationalists who share deeply disturbing commonalities with the German Nazi party.  I would not be surprised if Richard B. Spencer saw himself as cut from the mold of Adolf Hitler, hoping to follow his path to positions of leadership.

But this essay is not about the election, nor about white supremacists, or nationalism, or the deplorable state of our political system, its parties, or any such temporal and (it is to be hoped) changeable phenomena.  Rather, this essay is about habit.

Something about Perspective

Most of us, upon hearing the word “habit”, think of some mundane daily routine or action.  I have a habit of going to bed late.  I have a habit of scratching at my beard while I am thinking.  I habitually drink two cups of coffee in the morning.  Habits may range from the most trivial–cracking one’s knuckles, say–to the most important routines of a life: prayer, saying “I love you” to a spouse, charitable giving, or how one goes about working.

Like many of our English words, habit comes from Latin: proximately habitus, meaning a thing’s condition, and more remotely, habere, the verb meaning “to have, to hold”.  The Latin habitus was often used to translate the Greek ethos–connected to ēthos (character) and ethikē (ethics)–meaning something much more like our English habit, although narrower in focus.  Many of our habits begin and persist without conscious decision.  For the Greeks, especially Aristotle, habit was principally and primarily the result of consistent and deliberately-chosen action, to the point where it became automatic.  But much like the modern appropriation of the word ethos, habit also means having a kind of worldview: for the kind of person you are, the individually developed persona of the common human nature, determines the way in which you view the world.

So when I say that ours is a society of really bad habits, I do not mean that we chew our nails or smoke cigarettes.  Rather I mean that we have formed ourselves poorly in regards to how we both view and hold ourselves in relation to the world.

These poor habits appear to emerge from what I can only describe as an insipid attitude towards pleasure and pain: namely, that the former is the end goal of all our actions and the latter an evil to be avoided at nearly any cost.  But press the average person in conversation about what he or she means by the term “pleasure”, and you are likely to hear little more than some vague description of a “good feeling”; conversely, “pain” receives little more verbiage than “a bad feeling”.  Despite this amorphous understanding, we spend a great deal of time pursuing pleasure and striving to avoid pain, or its lesser cousin, discomfort.

I may be in the minority in this, but I think directing our lives by principles we do not really understand is a bad way to live.

Pleasure and Pain

What is pleasure?  Some may say a good feeling; more advanced sophists may say it is the result of dopamine’s activity in the brain.  The latter explanation actually tells us less of importance than does the former.  “Feeling” is a deeper and richer concept than a neurochemical reaction.  The feeling typically called pleasure undoubtedly incorporates such a reaction–but the reaction is only a part, and to take the part for the whole is simply asinine.

Pleasure is, rather, the immediate subjective occurrence consequent to something experienced as good.

What is pain?  Some will likely say a bad feeling–or perhaps, the result of a “chemical imbalance”, the body’s notification of a wound, or the feeling of loss.  This lattermost is the best, as it comprises all the rest: for pain is the immediate subjective occurrence consequent to something experienced as bad.

Academic Pain

At a certain point in the hunt for an academic job, it becomes difficult for many of us (myself included) to maintain the belief that we do not suck at what we do.  Since late 2014, I have been turned down or ignored for over 30 professor or instructor positions and fellowships.  In the past year, I looked outside academia, applying to roughly 35 companies for positions roughly suitable to my skillset and experience.  One of them offered me a job–sort of–but wanted me to start at an impossible date.  I recently put in 2 non-academic applications (they take so much less time) and have 5 academic openings on my “to-do” list.

Between both academic and non-academic, I have had fewer than 10 interviews.  Those 5 openings will sit on my list till the last minute, partially because they’re tedious, but also, I must admit, because I’m a bit afraid of another 5 rejections.  After all, maybe I suck.

At the same time, I am a published author, with 1 and likely 2 books soon under contract with reputable academic publishers, as well as a sprinkling of articles and recently a book review.  I wrote the majority of my highly regarded, well-praised, 187,000 word dissertation in less than a year.  Maybe I’m actually amazing.

The truth is likely somewhere in-between.  More importantly, even if I am amazing, I am also unwanted.  My dissertation was on a medieval thinker’s metaphysics and epistemology.  I’m currently working in phenomenology and semiotics.  I don’t play with the hot topics: feminism, ecology, social and political theory, bioethics, non-Western thought, or any other such.  They are not interesting topics.  They are movements.  I am fixated on the fixed, looking at the currents only insofar as they help us to see the regular patterns.

But still I wonder–am I maybe not that good at what I do?  So bad, in fact, that I cannot even be trusted to do competently things vaguely related to what I do?

The Slippery Slope to Self-Importance

For those of you unaware, academia currently produces roughly 400-500% more job-seeking PhDs than there are full-time jobs open every year.  We are the architects of our own demise.

“Don’t tell me the U.S.A. went down the drain because of Leftism, Knotheadism, apostasy, pornography, polarization, etcetera etcetera.  All those things may have happened, but what finally tore it was that things stopped working and nobody wanted to be a repairman.” – Walker Percy, Love in the Ruins.

Every academic I know hates grading.  It is a popular time for academics to get on Twitter (or write a blog post) and spend hours complaining about how much they hate grading, especially how long it takes.  A professor with whom I used to live would often remark that students have always known less than they ought, but that nowadays they know nothing.  They consistently provide ample proof of this, especially in their essays.

This semester, as usual, I tried to leave some of the best final essays for last.  It is nice when your grading can end on a good note.  To my disappointment, the final two fell below expectations.  This was actually true of nearly every essay I graded in the past week.  One exceeded what I anticipated, marginally, but the expectations were pretty low to begin with, so little cause for celebration there.

In fact, looking back at the whole semester, the disappointment was consistent.  It is not my students’ fault–they have been poorly educated in the kind of thinking necessary to doing well in the humanities, likely, their whole lives–but it is certainly their problem.  And yet, if I grade them justly, I will undoubtedly suffer blowback, either directly from the department or indirectly from student evaluations and online reviews.

It is frustrating to me.  Here, I see the problems so clearly.  For instance, it is evident that if my students were ever taught grammar, it was long ago, and the lessons were not taught in a way that stuck.  I do not expect explicit knowledge of grammatical terms–say, knowing what a dangling modifier is, or being able to understand me if I say, “the appositive construction”–but it would be nice if they had, at least, the habit of not writing with ubiquitous comma splices.  But what can I do?

“Seeing problems clearly” may be the universal curse of the academic.  We all do really know things, and more often than not, this leads us to believe that we know a great deal more than we do.  I say to myself, quietly, in my head, all the time: “Ah, but you are a philosopher.  You are not like those other ‘experts’ out there, overly-specialized, attempting to reduce all things to the narrow paradigms of their own specialty.  You study wisdom!” and so on.  All true, really; but just because I maintain the openness innate to true philosophical inquiry and study wisdom (whatever that means) does not mean that I have attained it, or that the solutions I envision to our complicated problems will actually produce truly good results.  I like to think they would.  But we will never know until someone puts me in charge.  So we will never know.

So even now, I sulk, like Achilles in his tent (Read a book!), frowning over the injustice of lesser academics having been awarded higher places.

Bad Professor Club

While in grad school (oh hey, I can say that now…), my fellow students and I referred to ourselves as the “Bad Grad Students Club”–largely because of an overpreening peer who insisted that unless you were reading a bare minimum of three scholarly articles every day, you were not really being a good student.  As most of us transitioned to being ABD adjuncts, we became a Bad Professors Club–mostly because we would decide to more-or-less wing our classes, rather than prep, so that we could go drink at the bar up the street, instead.  What can I say?  They had a terrific happy hour.

Nowadays, I find myself a bad professor of another sort.  For one thing, having relocated, I no longer have the familiar cadre of fellow delinquency and booze enthusiasts.  This has much improved the lot of my liver, I imagine.  For another, I formerly taught at a school that, though moving in the STEM direction, was still predominantly a liberal arts institution.  Though most of the students I had in my classes were not majoring in the humanities, a strong-enough core curriculum meant that they were suffused with enough education and atmosphere of the humanities to attain (at least on occasion) a deeper-than-superficial grasp of the issues discussed.

Meanwhile, I spend my non-teaching-duty days consumed in the writing of a book likely to end up on a handful of shelves and be cracked open only half as many times, a book dealing with the sophisticated topics of semiotics and phenomenology, slipping through Latin and German technical terminology, exploring the esoteric writings of a never-fully-repentant Nazi and a manic depressive (both of them with notorious adulterous affairs) as an antidote to both the materialistic neuroreductionistic and the angelistic deus ex machina explanations of human knowledge.

I am a bad professor.  Not because I do not know my material, not even because I am not interested in it; but I am a bad professor for these students.  They need a disciplinarian.  They need someone to break their bad habits, habits grown in an insular and truthless (not “post-truth”) world.  Instead, I ignore their bad habits, and continue to pass them along (albeit begrudgingly, and with a low grade), because I am convinced that some day I can become the “successful” academic I have long been waiting to be.

A bad habit.

The Intoxication of Mild Pleasure

I am no stranger to indulgence.  I have gone too far with alcohol, tobacco, even some of the gentler illegal substances–and more, besides.  But these pleasures are typically laden with a context that gives them purpose.  For a time, I drank not to feel; but mostly, I drank (and smoked, though that takes on a life of its own) to better enjoy the company of others.  Even sex can be–and I think ought to be–a means for mutual betterment.

Where I lapse mostly into the pursuit of pleasure for pleasure’s sake is in those pleasures more mild.  I may swallow my fire, but I take my poison slow.  By this I mean: the feeling of a girl’s thigh under my hand, a marathon of mindless TV, an engrossing video game, hot showers, the incessant music by which I block out the noise of the world, and always the ability to control the objects to which I am exposed.

This, I think, is common.  I am no exception to the rule.  These mild pleasures, in themselves–really, they are comforts–do no grave ill to us, in moderation.  But we are drunk on them, all the time, and find the sobriety of discomfort horrifying.

We find a prominent example of this discomfort and its attendant horror in the vocal Left’s reaction to the Trump presidency.  No pleasure has yet been taken away, yet the mere thought of the possibility is enough to discomfit to near the level of garment-rending and hair-tearing.  Infantile wailing and gnashing of teeth remains a perfected and oft-practiced art.

That so much can be made of the presidential election shows, I think, a deep problem: a nation of 300+ million cannot reasonably agree on the wide bevy of issues we have placed in the hands of Washington, and a return to governmental decentralization is necessary.  But that is another issue altogether; indeed, interruptions to our comfort seldom take so dramatic a form.  Typically, they may be assuaged, smoothed over, and our comfort restored in a day, an hour, a minute, a few seconds.  Woke up on the wrong side of the bed?  Just get a good night’s sleep, and tomorrow usually improves.  Internet service is out?  Use your phone’s 4G service until restoration.  Noisy neighbors?  Use your headphones (okay, sometimes that doesn’t work).  Need a new coat?  You can get it for 50% less than anywhere else, from Amazord dot netweb, delivered to your door in two days–or less!

“But why not?” you interject, borderline indignant at the perceived slight, “Why, Mr. Dr. Prof. Krampus, should we not enjoy these things, should we not be comfortable?  Aren’t you writing this from your Macrohard Expanse III, while pumping Amazord Muzak into your ears to block out noisy neighbors and city business and such?”

Indeed I am–as stated, I am no exception.  For I am soft and fragile like the rest of you, wrapped in my cocoon of comforts; if I am in any way an exception, it is that I have taken to hanging mirrors on its interior walls.  Self-absorption does not mean becoming lost in some introspection; rather, it means becoming ultraselective about the boundaries put upon one’s own “world”.  I am self-absorbed, and the boundaries, the walls, of my world, are mortared by bad habit.

Locating the Source

Where do these bad habits come from?  Are we victims of circumstance, of nature?  Genes, family?  Upbringing, destiny?

There are certain unverified stories said to be so good that, if they aren’t true, they ought to be.  One such story concerns the Catholic writer and apologist G.K. Chesterton, an Englishman writing in the early part of the 20th century.  When asked by the Times of London, along with other authors, to write an essay on the topic “What’s wrong with the world?”, his response was:

Dear Sirs,

I am.

Respectfully yours,
G.K. Chesterton.

Put less civilly, we could also quote an inexplicably insightful madman in Walker Percy’s Love in the Ruins: “You want to know your trouble?  You don’t love God, you love pussy.”

It is a philosophical claim the defense of which stands outside this article’s rambling ken, but the self and the source of one’s love are about as closely identifiable as any two elements in a human person can be.  What is my problem?  I don’t love God.  I love pussy.

God or Pussy?

First of all, “pussy” isn’t always sex.  It may be a video game.  It may be art.  It may be the cause of racial equality.  It may be the warmth of comfortable surroundings.  “Pussy” is that which distracts us, diverts us from a fuller, better way of being human.  “Pussy” is that out of which we build our cocoons.  In contrast, “God” isn’t always God, though one might argue that it ought to be.  God is, to paraphrase Christ, the Light, the Truth, and the Way.

How are we to walk that Way?  How can we see by that Light?  How might we grasp that Truth?  The first step, I think, is to muster the courage to poke a hole in the fragile walls of our cocoons ourselves, and to make ourselves look out.

And keep looking.

Why Chance Depends Upon Continuity

In his later writings, Charles Sanders Peirce often incorporated chance as an essential part of his theory of the universe.  Chance, as the “absence of cause”, he says (mistaking this to be Aristotle’s definition), is what allows for the evolutionary development of law and of habit-taking.  This necessity of “spontaneity, chance, play” is what he calls tychism.  I do not see how chance can in any way be primary, except as a consequence of the principle of potency–specifically, in our experience and in our experimental/observational capacities, as a consequence of the potency coextensive with the materiality of something.

Chance is not a something in its own right, but a derivative following upon the variability of material determinations.  I.e., as the potency of matter is determined through this or that act, different chances arise.  Determination of potency is a prerequisite to the occurrence of chance, inasmuch as what we mean by any chance event is not the absence of a causal event, but the interruption of one usual chain of causality, following its predetermined ordination, by the intersection of some usually unrelated chain of causation, itself following its own predetermined ordination.

This allows both for the variation evident in the evolutionary process–inasmuch as the abnormality occasioned in the intersection need not be an abnormal occurrence, but only abnormal to the object(s) involved–and for “free will”, since the freedom of the will consists in the ability to choose lesser known-goods over higher known-goods.  If by “chance” Peirce intends to include what is simply non-deterministic and therefore “spontaneous”, this non-determined and spontaneous event occurs in the election of a lesser over a higher; for it is characteristic of the determinate that it always follows from a kind of brute actuality, the kind of brute actuality which follows from feeling (as opposed to “reason” or the species-specifically human semiotic process).

Therefore, chance can in fact be an essential element in the cosmological progress of the universe; but in the same way as relation is equiprimordial with substance, so too chance (as the consequence of matter or potency) is equiprimordial with determinacy (the consequence of form or act).  The degree to which a being is determined in actuality diminishes the degree to which it is open to chance interactions; just as the degree to which a being is in act substantially diminishes the degree to which it can be really related to other beings.  How does this relate to the theory of entropy?  Does the entropic finality preclude further action because of homogeneity of act, or because of homogeneity of potency?  If the preclusion is based in the homogeneity of act, then it is faulty; for pervasive actuality does not prevent further act, it only prevents change in act.  We view change as a mechanism of good because we become bored with this or that good, this or that good being unfulfilling on account of its finitude.  Would not perfect act be perfect good, and therefore perfectly satisfactory?  If entropy, contrariwise, brings an end to action because nothing is in act enough to change something from a state of potency to one of act, then it coheres with the equiprimordiality of chance and determinacy.

That chance is an equiprimordial albeit dependent element in evolutionary development introduces a difficulty, however; for inasmuch as everything is undetermined and thereby subject to chance alterations, those things likewise contain an unintelligible element and, through that unintelligible element, the possibility of unpredictable change.  Such changes, once occasioned, can indicate the development of a new nature, as well as alter the proper consequences (the propria) of a certain nature; or they may illumine for us the inessential quality of attributes previously considered essential.  These pervasive, unintelligible possibilities of change make sure prediction, especially long-term prediction, nearly impossible.

Peirce: hipster before it was cool.

Simultaneously with his theory of tychism, Peirce develops a theory of synechism: that is, that all things are continuous; that neither the objects of experience nor the realities unexperienced are divisible into absolutely separate spheres.  As a part of this synechist theory, he notes that consciousness can be considered as individual, social, and spiritual.  His discussion of consciousness is, however, unhelpful, inasmuch as he approaches it from an aggregationist perspective, thinking about it as the continuity of “moments”: if we are conscious of a before, a now, and an after, we realize that the second becomes the third, following from the first; the afterwards is continuous with the now, adjacent to it.  There is something often Humean about Peirce’s conceptions of the connection of ideas (contiguity, association, “relation” broadened in a Kantian sense).  This Kantian-Humean influence is unfortunate.

Despite its flaws, Peirce’s notion of synechism is of great use in “understanding the riddle”, indeed, even for reconciling tychism into an intelligible framework.  As aforementioned, chance is equiprimordial but dependent; the continuity of the objects of experience, the continuity of the things themselves, allows for a unification of the evolutionary, indeterminate, unpredictable future.  That is, the break with the past introduced by a tychic event is not a true break with the things themselves, but only with the objectivizing structure–the structure of consciousness whereby a human being makes of a thing an object of consideration; and since the objectivizing structure is dependent on, derivative from, the things themselves, the continuity can be re-established also in the objectivizing structure, in the objective presence to a knower.

Consequently, it becomes a question as to what we really mean by “chance”.  That is: is chance really, truly, honestly something random, spontaneous?  Or is it not, rather, that it appears to us as random because we do not know the antecedent principles, the antecedent actualities, which have not yet occurred in our experience, such that we could discern the nature of the forms operating as causes?  To admit chance as anything else is to make the universe fundamentally unintelligible; for to be utterly random is to be, in fact, without a cause; which is to say, without a reason; which is to say, without intelligibility.

Could it, in fact, be the case, as Peirce claims, that the laws which govern the behavior of the universe are developed, the products of chance, the results of tychic development?  Would they not need to occur on the basis of prior, more fundamental laws?  This is the point he seems to miss: in order for anything to move, it needs a relative unmoved.  Change requires the unchanging.  Even in Peirce’s own system, becoming-law and habit-taking, the second and third to the first of chance, are themselves kinds of law, kinds of regularity, which need to be present for the governance of the irregular.

Chance is not the random; it is the unanticipated.  It is not contrary to the things effected; it is contrary only to our presuppositions about how the things “should” have been effected.

Why Information Differs from Knowledge

You hear it all the time: the internet puts a world of knowledge at your fingertips (and you use it for cat videos and porn).  Parenthetical clause aside, this claim is false.  The internet puts a world of information at your fingertips; but information and knowledge, as we actually use these terms today, do not signify one and the same thing.  Some people consider “knowledge” to be a very highly evolved form of information, or information in action; others take “knowledge” and “information” to be simply synonymous.  These supposed likenesses derive from a broadly computational theory of mind–that, as Steven Pinker (in)famously put it, “the mind is what the brain does.”

Information is organized data.  When data are collated, sorted, and structured so as to present an intelligible object, they become information.  The light hitting our eyes is a stream of data; when the eyes, nerves, and brain sort it out into what we know, based on previous experience, as a lamp, we have information.  But the knowledge, “This is a lamp”, differs from the information by means of which we make the judgment.  A dog could have nearly the same data presented to it from the lamp, and yet despite this, despite even having a similar informational basis, it will never know a lamp as a lamp; just as the dog and the human, given the same informational basis, will both regard the roast beef as food, but only the human will know it as food.
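If it helps to see the distinction in programming terms, here is a minimal, purely illustrative Python sketch of my own devising (the names and numbers are invented for the example, not drawn from anything above): collating raw readings into a structure yields information, but the judgment “this is a lamp” still lies outside anything the code does.

```python
# A minimal, purely illustrative sketch: raw "data" vs. collated "information".
# The names here (Reading, collate, raw_stream) are hypothetical, invented for
# this example; they come from no particular library.

from dataclasses import dataclass

# Raw data: an unstructured stream of (wavelength in nm, intensity) readings.
raw_stream = [(610, 0.80), (612, 0.82), (608, 0.79), (611, 0.81)]

@dataclass
class Reading:
    """Information: data collated and structured into an intelligible object."""
    mean_wavelength_nm: float
    mean_intensity: float

def collate(stream):
    """Sort and structure the raw data; this is organization, not judgment."""
    wavelengths = [w for w, _ in stream]
    intensities = [i for _, i in stream]
    return Reading(
        mean_wavelength_nm=sum(wavelengths) / len(wavelengths),
        mean_intensity=sum(intensities) / len(intensities),
    )

info = collate(raw_stream)
print(info)  # Structured information; the knowledge "this is a lamp" is nowhere in the code.
```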

Is this mere hair-splitting?  The difficulty stems largely from the English word “knowledge”; the origins of the word dwell in the murk of Middle English, but its history intertwines with its usage to translate two Latin words: scientia and cognitio.  The former word, evident in its derivative “science”, has the precise meaning of logical knowledge, knowledge achieved through explicitly conceptual consideration.  Cognitio, in contrast, covers the broad signification of any mental activity whatsoever.

The dog and the human alike cognize–as would any machines capable of emulating the neurological processing characteristic of higher animal life, machines truly capable of “learning”.  We could, therefore, speak of “artificial cognition”.  But the human alone possesses scientia.  The activity common to all such beings, whereby information is processed and either evokes a response or is deliberately responded to, is known as semiosis.  The term derives from the Greek semeion, meaning “sign”.  A sign, to steal a 17th-century Portuguese philosopher’s definition, is that the whole being of which consists in bringing something else to mind.  Precisely as a sign, this is all a sign does.  The significative function of a stop sign, for instance, consists in the command which it brings to mind; the significative function of the thermometer is to tell us something’s temperature; the word “Jupiter”, though composed of phones and/or letters (whether spoken or written, though the written is notably, in this case, a sign of the spoken), signifies, depending upon context, either the head of the Roman pantheon or the local gas giant planet (or, in a roundabout way, perhaps both).

Likewise, the odor of the roast beef signifies to dog and human alike the presence of food.  All information is semiosic; what we receive, as information, serves literally to inform, to give over a definite structure conveying something other than itself.  When we process this information, i.e., interpret it, we arrive back at a consideration of what the information signifies (the cognitive reception of information, incidentally, does not entail an introspective interpretation, a “looking inside the mind”, but [and this is most important] always has us looking back towards the source of the cognitively-received information).

Information, as the unit of semiosis, forms the fundamental basis of all knowledge.  But the distinguishing characteristic of knowledge is this: grasping the meaning of what the information signifies independently of context or environment.  Effective knowledge is always reincorporated into the context; but the de-contextualization provides crucial insight into the nature of the things considered.

What distinguishes the human being among the animals on earth is quite simple, yet was never fully grasped before modern times had reached the state of Latin times in the age of Galileo.  While every animal of necessity makes use of signs, yet because signs themselves consist in relations, and because every relation, real or unreal, is as relation – as a suprasubjective orientation toward something other than the one oriented, be that “other” purely objective or subjective as well – invisible to sense (and hence can be directly understood in its difference from related objects or things, but can never be directly perceived as such), what distinguishes the human being from the other animals is that only human animals come to realize that there are signs distinct from and superordinate to every particular thing that serves to constitute an individual (including the material structure of an individual sign-vehicle) in its distinctness from its surroundings.
-John Deely, The Semiotic Animal

For further reading about semiotics, I recommend the Routledge Companion to Semiotics, Thomas Sebeok’s Signs: an Introduction to Semiotics, and, why not, John Deely’s Basics of Semiotics; and, of course, for the deeply interested, the entire oeuvre of Charles S. Peirce.