The AI dooooooom thread

Started by Hamilcar, April 06, 2023, 12:44:43 PM


viper37

Maybe AI isn't so threatening after all


Fake it till you unicorn? Builder.ai's Natasha was never AI – just 700 Indian coders behind the curtain


Quote
If you haven't seen your LinkedIn feed flooded with takes on Builder.ai's collapse, you're following the wrong people. The London-based AI unicorn, once lauded for making app development "as easy as ordering pizza," has imploded spectacularly amid fake AI claims, financial fraud, and a data breach exposing 1.29 terabytes of client secrets.
How did a startup backed by Microsoft, SoftBank, and Qatar's $450 billion sovereign wealth fund become Silicon Valley's latest cautionary tale? Let's break down the collapse.


The scandal erupted on May 31 when Ebern Finance founder Bernhard Engelbrecht posted a bombshell thread on X: "Builder.ai's 'AI' was 700 humans in Delhi pretending to be bots. The only machine learning here was investors learning they'd been duped." The post amassed 2.8 million views in 24 hours, with critics dubbing it "the Theranos of AI" and "WeWork 2.0." 

Beyond the schadenfreude lies tragedy. The 700 Indian engineers, paid $8–15 per hour, now face visa revocations and blacklisting. "They forced us to use fake Western names in client emails," said former developer Arjun Patel. "Now I can't get hired because employers think I'm a bot."
Leaked internal Slack messages reveal engineers were instructed to:
  • Mimic AI response times by delaying code delivery by 12–48 hours
  • Use templated responses like "Natasha is optimising your request" while manually building apps
  • Avoid technical jargon in client calls to maintain the "no-code" illusion
Former employees described the operation as "a call centre with better marketing." One developer confessed: "We'd laugh about 'Natasha' — our inside joke for the graveyard shift."

The money trail: How Builder.ai faked $220M in sales
While the AI deception is staggering, the financial engineering is equally brazen. Documents show Builder.ai and Indian social media giant VerSe Innovation engaged in round-tripping from 2021–2024, billing each other $180 million for nonexistent services. Builder.ai would invoice VerSe $45 million quarterly for "AI licensing," while VerSe billed Builder.ai $44.7 million for "market research"—a laundering scheme that inflated both companies' revenues by 300%.
When lenders demanded proof of its $220M 2024 sales pipeline, an internal audit revealed the truth:
  • Real revenue: $55M (75% from legacy human-services contracts)
  • Projected losses: $99M for 2025
  • Cash burn: $32M/quarter pre-collapse
"It was a Potemkin startup," said a Viola Credit executive. "Even their Mumbai office was a WeWork sublet."
Before the financials unravelled, Builder.ai faced a December 2024 breach exposing:
  • 3.1 million client records (emails, project specs, NDAs)
  • 337K invoices showing manual billing at $18/hour rates
  • Internal memos discussing "AI placebo effects" and "reputation firewalls"
Security researcher Jeremiah Fowler discovered the data on an unsecured AWS bucket. "The folder '/Natasha_AI' contained Excel sheets tracking human coding hours. It was fraud in plain sight." 
So, was any of Builder.ai real?
The evidence says no:
  • AI Claims: 0 verified NLP/ML patents; 100% human-coded output 
  • Financials: 300% revenue overstatement; $180M fake invoices 
  • Leadership: CEO buying Dubai real estate while laying off 1,000 staff 
Builder.ai's collapse has triggered sector-wide panic. Sequoia Capital's AI lead tweeted: "If a 'unicorn' with Microsoft's backing was fake, what does that say about the other 3,000 AI startups?"
Data points to a bubble: 90% of AI startups have no proprietary models, and of the $28B in VC AI funding since 2023, 40% went to companies with under $1M revenue.
As US prosecutors subpoena financial records, the tech world faces uncomfortable truths about due diligence in the AI gold rush. For now, Builder.ai's legacy is clear: a $1.5B monument to the power of hype over substance.
I don't do meditation.  I drink alcohol to relax, like normal people.

If Microsoft Excel decided to stop working overnight, the world would practically end.


Baron von Schtinkenbutt

Quote from: Jacob on June 05, 2025, 12:40:43 PM
Actual Indians is the one I heard.

Forgot about that, I think Actually Indians is what I was really thinking of.

garbon

https://www.reuters.com/business/media-telecom/disney-universal-sue-image-creator-midjourney-copyright-infringement-2025-06-11/

Quote
Disney, Universal sue image creator Midjourney for copyright infringement

Walt Disney (DIS.N) and Comcast's (CMCSA.O) Universal filed a copyright lawsuit against Midjourney on Wednesday, calling its popular AI-powered image generator a "bottomless pit of plagiarism" for its use of the studios' best-known characters.
The suit, filed in federal district court in Los Angeles, claims Midjourney pirated the libraries of the two Hollywood studios, making and distributing without permission "innumerable" copies of characters such as Darth Vader from "Star Wars," Elsa from "Frozen," and the Minions from "Despicable Me".

Spokespeople for Midjourney did not immediately respond to a request for comment.
Horacio Gutierrez, Disney's executive vice president and chief legal officer, said in a statement: "We are bullish on the promise of AI technology and optimistic about how it can be used responsibly as a tool to further human creativity, but piracy is piracy, and the fact that it's done by an AI company does not make it any less infringing."

NBCUniversal Executive Vice President and General Counsel Kim Harris said the company was suing to "protect the hard work of all the artists whose work entertains and inspires us and the significant investment we make in our content."
The studios claim the San Francisco company rebuffed their request to stop infringing their copyrighted works or, at a minimum, take technological measures to halt the creation of these AI-generated characters.

Instead, the studios argue, Midjourney continued to release new versions of its AI image service that boast higher quality infringing images.
Midjourney recreates animated images from a typed request, or prompt.
In the suit filed by seven corporate entities at the studios that own or control copyrights for the various Disney and Universal Pictures film units, the studios offered examples of Midjourney animations that include Disney characters, such as Yoda wielding a lightsaber, Bart Simpson riding a skateboard, Marvel's Iron Man soaring above the clouds and Pixar's Buzz Lightyear taking flight.

The image generator also recreated such Universal characters as "How to Train Your Dragon's" dragon, Toothless, the green ogre "Shrek," and Po from "Kung Fu Panda."
"By helping itself to plaintiffs' copyrighted works, and then distributing images (and soon videos) that blatantly incorporate and copy Disney's and Universal's famous characters -- without investing a penny in their creation -- Midjourney is the quintessential copyright free-rider and a bottomless pit of plagiarism," the suit alleges.

...
"I've never been quite sure what the point of a eunuch is, if truth be told. It seems to me they're only men with the useful bits cut off."
I drank because I wanted to drown my sorrows, but now the damned things have learned to swim.

Sheilbh

Quote from: Syt on May 30, 2025, 08:19:22 AM
:D

We use AI a lot to transcribe our calls, but review the notes before sending them to all participants. :)
Reminds me of the recent PopBitch item I really enjoyed and shared with everyone at work :lol: This feels like a literal panic dream I've had.
Quote
>> AI Goes Popbitch <<
ITV staff - your mic is on!

Following the company-wide adoption of Google's AI product Gemini at ITV, all meetings now have the option of transcriptions and summaries. Handy, right!

Maybe, but it has also led to some awkward office casualties linked to the shiny new feature.

In one online session – which was being transcribed and summarised - a couple of employees stayed on after the meeting had ended and had a good old bitch.

A pithy summary of their bitching session was duly appended to the meeting notes, with a word-for-word transcription also available for anyone who wanted a deep-dive.

Quote
I did not sign up for this Cyberpunk future. :P
Also this one:


Even if these companies are possibly scams (which I wouldn't bet against), this is the bet, right? The massive capital going into AI is there because it's expected to improve productivity, which basically means doing for white collar jobs (especially towards the bottom of the labour market) what the industrial revolution through to automation did for blue collar roles. You pay for the AI in your enterprise agreement with Salesforce or Oracle and you get rid of loads of your customer service people.

I feel like my most alarmist take on all this is that if that bet is right, then the impact of this is going to be hugely transformative and socially disruptive - reordering economies and probably resulting in widespread unemployment etc - and if that bet is wrong, then we've probably gone through one of the biggest misallocations of capital in economic history that will be very painful to unwind - quite possibly causing serious economic damage and widespread unemployment etc :ph34r:

I think there's something interesting in how both of these echo the more misanthropist side of green politics (the "we are the virus" / "nature is healing" in the absence of humanity stuff). It's something I think about more and more, but my own politics is becoming far more, almost primarily, humanist - I increasingly think the most important thing, and the thing most overlooked in our world, is the human: each of us as individual, flawed, problematic humans. And particularly the peripheral - the poor, the vulnerable, the elderly, the unloveable.

Inevitably because it's true for me I also think it's true for the left in general but I think the left really needs to ground itself in (to nick Leo XIV's phrase) a "defence of human dignity, justice and labour".
Let's bomb Russia!

Jacob

As I understand it that's not a real company, but an advertising company trying to generate controversy to raise its profile with a fake campaign.

And if so, it seems to be working.

Josquius

https://futurism.com/ceos-ai-clones-hallucinations

Quote
CEOs Are Creating AI Copies of Themselves That Are Spouting Braindead Hallucinations to Their Confused Underlings

Kind of interesting.
Though honestly it does seem to me that CEO speeches are usually very easily replaceable with AI.
██████
██████
██████

Jacob

Thought this was a reasonably interesting article: The crisis in education isn't AI — it's meaning

Quote
The crisis in education isn't AI — it's meaning
In a world obsessed with productivity and optimization, curiosity, patience, and purpose are quietly eroded

By: Ashima Shukla, Staff Writer

In the age of AI, effort has become optional. As students, we no longer need to flip through textbooks or reread chapters. As one homework app asks, "Why scroll through 100 pages when AI can summarize the most important things in 10?" Across classrooms and countries, education is being reshaped by the insistent buzz of generative AI models. But AI didn't just appear in the classroom; it was invited in by institutions eager to modernize, optimize, and compete.

For instance, the International Artificial Intelligence in Education Society (IAIED), founded in 1997 and now including members from 40 countries, has long positioned itself "at the frontiers of the fields of computer science, education, and psychology." Through organizing major research conferences, publishing a leading journal, and showcasing diverse AI applications, IAIED is critical to the discourse and development of AI in education. It also reflects a broader trend: between 2025 and 2030, the AI industry is expected to grow from $6 billion to over $32 billion USD. 83% of higher education professionals from a diverse range of institutions believe "generative AI will profoundly change higher education in the next three to five years." Silicon Valley giants aren't just innovating these tools. They are also lobbying for their integration into the school system. This is a transformation backed by capital, coded by corporations, and endorsed by institutions desperate to keep up.

And it's working. A McKinsey survey found that 94% of employees and 99% of C-suite leaders are familiar with Gen AI tools, while 47% of employees expect to use AI for nearly one-third of their daily tasks. And universities are listening. Offering courses for students to become prompt engineers and AI ethicists, institutions are preparing them for jobs that didn't exist five years ago but now reflect the priorities of an efficiency obsessed corporate world. But who does this transformation benefit, and at what cost?

This isn't just a pedagogical, labour, or environmental issue, as important as those are. It is something more fundamental to human nature: the erosion of curiosity and critical thinking. As dopamine-fuelled thumbs dance to infinite scrolls, we lose the quiet patience needed to parse meaning from a paragraph. The problem isn't AI's capabilities but our willingness to let corporations dictate the goals of education — and life. When our only objective is maximum productivity and minimal resistance, we strip learning of friction, and therefore, its meaning. After all, if anyone can "generate" a paper, what is the point of writing one?

In this reality increasingly enmeshed with technologies, we've come to expect answers — and dopamine — to be delivered to us immediately. Students begin to internalize that if something isn't fast, it isn't worth doing. However, education should be a practice to cultivate, not a credential to purchase.

As a recent study found, the more confident people are in AI's abilities, the less they rely on their own critical thinking. Similarly, a study on "cognitive offloading" showed that frequent use of AI correlated with weaker problem-solving skills. This suggests that as people grow more accustomed to immediate answers, they lose the memory of mental struggle. Younger students are especially vulnerable, growing up in an environment where boredom is pathologized, curiosity is optional, and learning is gamified. What we are learning is not how to think but how to shortcut.

Even before ChatGPT, researchers warned that students fail to benefit from homework when answers are readily available online. Now, when entire assignments can be completed without thought, Stanford professor Rob Reich asks whether what is at risk is AI displacing the very act of thinking. Writing, after all, is not just a means to communicate but also a way of creating knowledge. The very act of wrestling with an idea, sitting with uncertainty, failing, rephrasing, and trying again, is what shapes the intellect.

And yet, the platforms profiting from this are preaching empowerment. They claim to democratize access, support learning, and save time. But time saved from what exactly? From the very moments that develop intellectual resilience? We have mastered the art of never being bored, and in the process, forgotten how to wonder.

This comes with a heavy psychological toll. As Stanford assistant professor Chris Piech shared, a student broke down in his office, convinced that years of learning to code were now obsolete. The anxiety isn't about incompetence, it is about irrelevance. When we are told our skills are rendered useless, we don't just lose confidence, we lose a sense of purpose. Because, what is learning worth in a world of infinite answers?

We're told to be productive, efficient, optimized. As if the real value in being human comes from what we can produce and how fast we can do it. But the best ideas often come from wandering, from play, from slowness. Real understanding takes time. Sometimes, it takes failing. Sometimes, it takes boredom.

We are drowning in data but are starved for connection. For all the content and knowledge at our fingertips, we are lacking the time to sit alone, to ask good questions, to chase rabbits down holes without knowing where they will lead. In this environment, perhaps the most radical thinking we can learn to do is to slow down. To reimagine education not as a product to be consumed, but as a process of becoming. Perhaps it is time for fewer lectures and more labs, fewer tests and more conversations. Perhaps it is time to value peer collaboration, iterative writing, reflection, and the kinds of assessments that ask students to apply knowledge in solving tasks.

The antidote to the crisis of AI in education is to remember that education is not a product; it is a process. Models like the Four P's of Creative Learning (Projects, Passion, Peers, and Play) offer a blueprint. Instead of treating students as users or consumers, we must see them as co-creators of meaning. How might our relationship with learning change if we were encouraged to fail better, not just succeed faster? The goal shifts from producing measurable outcomes to cultivating a deep curiosity and adaptive thinking.

Learning shouldn't be about acquiring answers. It should be about learning to ask better questions. ChatGPT can help you answer questions, but it cannot teach you how to understand or apply that in the real world. In the face of Big Tech, reclaiming learning as joyful, frustrating, and meaningful is a radical act of resistance. To learn to learn and love it. To recover our passion, we must unlearn the narratives sold to us by billion-dollar companies and build new ones rooted in slowness, struggle, and the sacredness of thought.

Jacob

Another interesting (IMO) article, about the impact of generative AI on the data needed to continue developing AI.

Quote
The launch of ChatGPT polluted the world forever, like the first atomic weapons tests

Academics mull the need for the digital equivalent of low-background steel

For artificial intelligence researchers, the launch of OpenAI's ChatGPT on November 30, 2022, changed the world in a way similar to the detonation of the first atomic bomb.

The Trinity test, in New Mexico on July 16, 1945, marked the beginning of the atomic age. One manifestation of that moment was the contamination of metals manufactured after that date – as airborne particulates left over from Trinity and other nuclear weapons permeated the environment.

The poisoned metals interfered with the function of sensitive medical and technical equipment. So until recently, scientists involved in the production of those devices sought metals uncontaminated by background radiation, referred to as low-background steel, low-background lead, and so on.

One source of low-background steel was the German naval fleet that Admiral Ludwig von Reuter scuttled in 1919 to keep the ships from the British.

More about that later.

Shortly after the debut of ChatGPT, academics and technologists started to wonder if the recent explosion in AI models has also created contamination.

Their concern is that AI models are being trained with synthetic data created by AI models. Subsequent generations of AI models may therefore become less and less reliable, a state known as AI model collapse.

In March 2023, John Graham-Cumming, then CTO of Cloudflare and now a board member, registered the web domain lowbackgroundsteel.ai and began posting about various sources of data compiled prior to the 2022 AI explosion, such as the Arctic Code Vault (a snapshot of GitHub repos from 02/02/2020).

The Register asked Graham-Cumming whether he came up with the low-background steel analogy, but he said he didn't recall.

"I knew about low-background steel from reading about it years ago," he responded by email. "And I'd done some machine learning stuff in the early 2000s for [automatic email classification tool] POPFile. It was an analogy that just popped into my head and I liked the idea of a repository of known human-created stuff. Hence the site."

Is collapse a real crisis?

Graham-Cumming isn't sure contaminated AI corpuses are a problem.

"The interesting question is 'Does this matter?'" he asked.

Some AI researchers think it does and that AI model collapse is concerning. The year after ChatGPT's debut several academic papers explored the potential consequences of model collapse or Model Autophagy Disorder (MAD), as one set of authors termed the issue. The Register interviewed one of the authors of those papers, Ilia Shumailov, in early 2024.

Though AI practitioners have argued that model collapse can be mitigated, the extent to which that's true remains a matter of ongoing debate.

Just last week, Apple researchers entered the fray with an analysis of model collapse in large reasoning models (e.g. OpenAI's o1/o3, DeepSeek-R1, Claude 3.7 Sonnet Thinking, and Gemini Thinking), only to have their conclusions challenged by Alex Lawsen, senior program associate with Open Philanthropy, with help from AI model Claude Opus.

Essentially, Lawsen argued that Apple's reasoning evaluation tests, which found reasoning models fail at a certain level of complexity, were flawed because they forced the models to write more tokens than they could accommodate.

In December 2024, academics affiliated with several universities reiterated concerns about model collapse in a paper titled "Legal Aspects of Access to Human-Generated Data and Other Essential Inputs for AI Training."

They contended the world needs sources of clean data, akin to low-background steel, to maintain the function of AI models and to preserve competition.


"I often say that the greatest contribution to nuclear medicine in the world was the German admiral who scuppered the fleet in 1919," Maurice Chiodo, research associate at the Centre for the Study of Existential Risk at the University of Cambridge and one of the co-authors, told The Register. "Because that enabled us to have this almost infinite supply of low-background steel. If it weren't for that, we'd be kind of stuck.

"So the analogy works here because you need something that happened before a certain date. Now here the date is more flexible, let's say 2022. But if you're collecting data before 2022 you're fairly confident that it has minimal, if any, contamination from generative AI. Everything before the date is 'safe, fine, clean,' everything after that is 'dirty.'"

What Chiodo and his co-authors – John Burden, Henning Grosse Ruse-Khan, Lisa Markschies, Dennis Müller, Seán Ó hÉigeartaigh, Rupprecht Podszun, and Herbert Zech – worry about is not so much that models fed on their own output will produce unreliable information, but that access to supplies of clean data will confer a competitive advantage to early market entrants.

With AI model-makers spewing more and more generative AI data on a daily basis, AI startups will find it harder to obtain quality training data, creating a lockout effect that makes their models more susceptible to collapse and reinforces the power of dominant players. That's their theory, anyway.

"So it's not just about the sort of epistemic security of information and what we see is true, but it's what it takes to build a generative AI, a large-range model, so that it produces output that's comprehensible and that's somehow usable," Chiodo said. "You can build a very usable model that lies. You can build quite a useless model that tells the truth."

Rupprecht Podszun, professor of civil and competition law at Heinrich Heine University Düsseldorf and a co-author, said, "If you look at email data or human communication data – which pre-2022 is really data which was typed in by human beings and sort of reflected their style of communication – that's much more useful [for AI training] than getting what a chatbot communicated after 2022."

Podszun said the accuracy of the content matters less than the style and the creativity of the ideas during real human interaction.

Chiodo said everyone participating in generative AI is polluting the data supply for everyone, for model makers who follow and even for current ones.

Cleaning the AI pollution

So how can we clean up the AI environment?

"In terms of policy recommendation, it's difficult," admits Chiodo. "We start by suggesting things like forced labeling of AI content, but even that gets hard because it's very hard to label text and very easy to clean off watermarking."

Labeling pictures and videos becomes complicated when different jurisdictions are involved, Chiodo added. "Anyone can deploy data anywhere on the internet, and so because of this scraping of data, it's very hard to force all operating LLMs to always watermark output that they have," he said.

The paper discusses other policy options like promoting federated learning, by which those holding uncontaminated data might allow third parties to train on that data without providing the data directly. The idea would be to limit the competitive advantage of those with access to unadulterated datasets, so we don't end up with AI model monopolies.

But as Chiodo observes, there are other risks to having a centralized government-maintained store of uncontaminated data.

"You've got privacy and security risks for these vast amounts of data, so what do you keep, what do you not keep, how are you careful about what you keep, how do you keep it secure, how do you keep it politically stable," he said. "You might put it in the hands of some governments who are okay today, but tomorrow they're not."

Podszun argues that competition in the management of uncontaminated data can help mitigate the risks. "That would obviously be something that is a bulwark against political influence, against technical mistakes, against sort of commercial concentration," he said.

"The problem we're identifying with model collapse is that this issue is going to affect the development of AI itself," said Chiodo. "If the government cares about long-term good, productive, competitive development of AI, large-service models, then it should care very much about model collapse and about creating guardrails, regulations, guides for what's going to happen with datasets, how we might keep some datasets clean, how we might grant access to data."

There's not much government regulation of AI in the US to speak of. The UK is also pursuing a light-touch regulatory regime for fear of falling behind rival nations. Europe, with the AI Act, seems more willing to set some ground rules.

"Currently we are in a first phase of regulation where we are shying away a bit from regulation because we think we have to be innovative," Podszun said. "And this is very typical for whatever innovation we come up with. So AI is the big thing, let it go and fine."

But he expects regulators will become more active to prevent a repeat of the inaction that allowed a few platforms to dominate the digital world. The lesson of the digital revolution for AI, he said, is to not wait until it's too late and the market has concentrated.

Chiodo said, "Our concern, and why we're raising this now, is that there's quite a degree of irreversibility. If you've completely contaminated all your datasets, all the data environments, and there'll be several of them, if they're completely contaminated, it's very hard to undo.

"Now, it's not clear to what extent model collapse will be a problem, but if it is a problem, and we've contaminated this data environment, cleaning is going to be prohibitively expensive, probably impossible."

Oexmelin

I now begin all of my classes by talking about the purpose of assignments. It's so easy for students to only think of assignments as outputs. Like - the world doesn't need a summary of the book I have assigned for you to read. I assign it for you to read, so that you think, and wonder, and try to exert your discernment in distinguishing the essential from the accessory. Trying to change that mindset isn't easy.

But it's also quite hard to convey that message when all the rest of the world is spreading quite a different message. Capacity for concentration is now shit. Students struggle to read articles - to say nothing of books.  It takes a lot of time to rebuild what has been lost. It's also a lot more costly. I am able to do that because I have smaller classes; I get to see how students write and think. AI detectors are fallible, and no doubt some students will use it without my noticing. Universities (and schools generally) would need much smaller classes. As always, private institutions will do so, contributing to the pervasive inequality. 
Que le grand cric me croque !

Syt

The author asks ChatGPT to look through her essays on Substack and comment on them, to help her pick which ones to include in a query to an agent.

https://amandaguinzburg.substack.com/p/diabolus-ex-machina

It turns into a conversation full of apologies for lying, while it keeps on lying. :D (I've seen some commenters online likening it to a nice-guy bullshitter being called out on their stories and going "Aw shucks!")

I think AI can be a useful tool, but for critical analysis of online sources it's maybe not quite there yet. :P
We are born dying, but we are compelled to fancy our chances.
- hbomberguy

Proud owner of 42 Zoupa Points.

crazy canuck

Whenever a young lawyer asks to join my administrative law group, I give that person a reading list of five cases, and I ask them to come back to me after they've read the cases so that we can discuss what they've learned.  There is no trick here. These are the cases that are central to understanding administrative law in Canada.

Up to about 5 years ago everyone read them and I had some very interesting conversations with very bright lawyers, some of whom have gone on to do remarkable things.

In the last 3 to 5 years the responses I've gotten have changed remarkably.  Most of the time the person never comes back because they simply haven't read the cases.  I had one brave soul tell me that they were too busy to read the cases, but they would really like to work on my files anyway.  Most recently people have asked whether they can just give me a written summary of what they have learned rather than coming into my office to talk about it.  I politely decline and say that I'd rather speak to them in person, but of course we all know what's actually happening here.

Thankfully, there are still young lawyers who will put in the work, read the cases, and have a good conversation about what they learned, but the number is declining.

Sheilbh

Not to turn this into an old person thread - but thinking about changes in the last 3-5 years, and this isn't to do with ChatGPT, I have several friends who are teachers who have all said there's been a huge shift in behaviour since covid. They all say that basically all of their colleagues with pre-covid experience agree that something happened behaviourally.

Sadly one is even quitting - a job he has loved, and one where he's had students go on to study his subject at university and return to the school as teachers - because his experience is that it's really grim right now.
Let's bomb Russia!