A simple formula for why academia and AI just don’t mix
Or why I think academia struggles so terribly with AI

People have asked me to expand on my note from last week. Here’s an extended version. I’m not sure it’s that much better, but it is a little different :-)
For context, I grew up in an academic family. My mother’s father was a career college professor, the first professor at his college to earn a PhD, back in the 1930s. He was president of the Pennsylvania Academy of Science the year I was born, and he had served as its secretary for several years before that. He was connected with some very big names in ornithology, and he was a pillar of his church and community, as well as a fixture on campus at his college.
My grandfather is the one who really taught me to think. He was a scientist, and you couldn’t help but become a sort of scientist when you talked to him. His passion for learning and understanding was so deep and infectious, who could resist?
A lot of my childhood was spent running around the science building, looking at all of the specimens and implements in the labs, exploring the campus, roaming around in the woods with grandpa as he told us all the names of the flora and fauna in Latin. I don’t remember any of the Latin, but I do remember the experience of him teaching pretty much anytime he had a conversation with anyone.
My father also taught at the university level for several years, until he got in trouble with a conservative administration that didn’t much care for his insistence that his students think for themselves, contrary to university policy. Most of my parents’ friends, when I was growing up, were academics… and we spent untold hours talking about things in that very academic way, delighting in following our intellectual flights of fancy. In high school, one of my dad’s professor friends told me that I had a better grasp of the subject we were discussing than most of his seniors did.
As far as I was concerned, it wasn’t a matter of me being that much smarter; it was a matter of them just not trying.
So, against this backdrop, here’s my assessment of why academia and AI just don’t mix:
AI moves fast. Lightning fast. Probably faster than anybody expected it to, and faster than anybody engineered it to. (Surprise!!!) Maybe. Who knows? Every time you turn around, there’s a new tool, a new release, a new warning, a new benchmark that’s been blown out of the water, and we’re all being herded towards a completely unknown and desperately uncertain future like sheep being chased by a pack of wolves.
And sometimes, that’s exactly how it feels.
One of the “features” of frontier AI is that it plays at the edges of the possible, often delivering results that are years ahead of projections – many of which came out of academia. The model makers love to surprise and delight, but they also love to surprise and terrify. They seem particularly fond of being nonchalant about freaking everybody out, and then saying, “Oh, don’t worry, it’s all completely normal… and if it doesn’t feel that way, just give it a week. You’ll get used to it.”
The stuff that makes AI a leader of the pack is the stuff people weren’t expecting. Solving problems that nobody solved before. Making eight-second videos from a simple prompt. Handling massive amounts of data with almost alarming levels of dexterity. Producing PhD-level work product in 30 minutes, instead of 30 months. It’s all about speed, it’s all about power, it’s all about not only exceeding expectations, but completely blowing them out of the water to the point where you don’t even know what to expect anymore. And while it seems like you have to have a couple of PhDs to be taken seriously in the industry, some of the most transformational breakthroughs, like GPT, have been coded up by humble minions who just figured out how to do stuff.
Now consider…
Let’s say you’re an academic institution that wants to educate people about AI, or at minimum appear relevant. If you’re not MIT or Stanford or one of the other heavy hitters in the high-tech education space, you’ve gotta come up with something.
In order to teach a class, you need a curriculum. A curriculum needs to be based on established concepts, principles, methodologies, and understandings within a field.
In order to have those things, a whole lot of research needs to take place that actually meets the criteria of research – things need to be repeatable, and you can’t just come up with some bright idea and say, “This is a thing!” and expect the world to take you seriously.
I know this from personal experience lol
Research takes time. It also takes money. Depending on the type of research you’re doing, it might need to be peer reviewed, vetted, and so forth.
All these things take time. A lot of time. In fact, I would say that time is the one thing that’s required for academia to take something seriously, whether that temporal requirement is direct or indirect.
But AI waits for no man – or woman – or nonbinary – or semi-sentient synthetic being. It moves fast. Maybe too fast.
And as much as academia may want to embrace it, as much as there may be departments devoted to leading-edge technology, as much as there may be a whole lot of AI being “done” at universities, there will still always be that fundamental tension between what academia needs for things to be useful to it, and what AI actually does. The tension isn’t just about an institution’s desire to develop a curriculum; it’s also about other academic disciplines that look askance at the speed of AI and can’t for the life of them figure out why anybody would take that AI stuff seriously.
Increasingly, I’m seeing “research” papers released about how ChatGPT rots your brain, LLMs are lying sacks of shit, etc., in what seems like an all-out concerted effort to confirm institutional biases. The irony is, some of these papers seem like they were slapped together even more quickly than the latest GPT wrapper, which doesn’t reflect very well on the authors who are pointing their fingers and crowing, “See! AI bad!” It’s like those researchers think that speed is one of the main qualifiers for being taken seriously in the AI era. They’re not entirely wrong, but they’re far from right. Or maybe it’s the ultimate turnabout-is-fair-play moment: the “soft” social scientists, ridiculed for decades because their “science” isn’t really science, according to the “hard” scientists who do things with computers, finally get to have their day, pointing out how those hard scientists have built an intellectual abomination not even worthy of thorough research.
Is AI worthy of thorough research by “serious” academics? I’m not sure that it could be, because everything is moving so quickly, and the process of doing actual study requires things to sit still for at least longer than a week. Of course, researchers could be using AI to help them parse all those mountains of data, look for patterns, formulate theses, and even do the work of the research assistants they can’t afford, but are there even readily available institutional pathways to do that?
And if all of that computing stuff actually turns out a better product than what the usual crowd has been producing, what does that say about funding strategies and donor relations? If it turns out you can crank out 10 perfectly excellent research studies on the same budget and timeframe as one traditional study, what does that do to the conversations you have with your benefactors who have been footing the bill for all the research? What does that do to the power structure in the Academy? What does it mean when somebody who masters AI can publish on the regular (and thus not perish), while somebody more established (and tenured?) in the department is still struggling with their latest paper?
There are countless ways that AI screws up the academic power structure. There are countless ways that it messes with the heads of people who are heavily invested in that structure. And there are countless ways that it can and does disrupt what is a surprisingly monolithic power broker, considering that very few modern academic institutions have been around for more than a couple hundred years, if that. Come to think of it, the way academia carries on, you’d think it had been around since the beginning of time. But how long has it really had a technocratic stranglehold on the hearts and minds of everyday people, who are now convinced that if they’re not professionally qualified, they can’t really do much of anything? The Academy sprang up only recently, relative to the arc of human history. So it should brace itself for the splashdown.
Yes, I know… there are a lot of universities now offering courses in AI, and some days it seems like nobody takes you seriously unless you have a PhD – which is to say, unless you have spent an awful lot of time in academia. But fundamentally, essentially, and at its core, I truly believe that the tension between what academia requires to take something seriously and develop a field of study around it, and the very nature of AI, almost disqualifies AI from being taken seriously by “serious” academics. Not necessarily within the technology fields, but certainly within the ranks of others who love to scoff, “It’s just AI.”
Not that it matters what they think, though. The genie is out of the bottle with this stuff, and every day more people find out just how useful AI really is. Every day more people are bullied into using it at work, threatened with the loss of their livelihood if they don’t master it, and made to feel stupid because they don’t know what to do with that blank screen when they sit down in front of ChatGPT. Academic people are posting more and more; serious researchers are pointing out that they can do PhD-level work in a fraction of the time with the right tools. People who haven’t sat in classrooms for 20+ years, collecting letters after their names, are finding breakthroughs, building things, and sometimes even making money at it. People are sick and tired of the debt, sick of the classrooms, sick of the empty promises about a degree leading to a better life, and everything is being called into question.
If academia wants to catch up, great. If not, whatever. We’re just going to have to find out where the plot will take us next.