Artificial intelligence made headlines in 2024, not just for its advancements but for a series of bizarre incidents. From search engines suggesting people eat rocks to AI-generated rat genitalia appearing in peer-reviewed papers, the year highlighted both the potential and pitfalls of this rapidly evolving technology. This recap explores some of the most notable and unusual moments in AI this year.

The rise of generative AI, fueled by Transformer-based models, ushered in an era of experimentation and unexpected outcomes. Because the technology was so new, many users and companies were still probing its boundaries. The rush to innovate, coupled with hype, led to the deployment of some arguably ill-advised AI systems, such as automated military targeting. Despite the weirdness, 2024 also brought genuine advances in areas like music synthesis and video generation.

Early in the year, OpenAI's ChatGPT experienced a significant malfunction, producing incoherent and nonsensical responses. Users described the system as "having a stroke," with outputs sometimes mimicking Shakespearean language. OpenAI attributed the issue to a bug in how the model processed language and quickly resolved it; even so, the event raised concerns about the opaque nature of commercial AI systems.

The collision of AI-generated imagery and consumer expectations came into sharp focus when a "Willy Wonka" event in Scotland, advertised using AI-generated images of a fantastical wonderland, turned out to be a sparsely decorated warehouse. The event was such a letdown that it led to police intervention and became a meme online, illustrating the chasm between AI promises and reality.

Peer review in scientific publishing also experienced a bizarre incident when a research paper containing AI-generated images of a rat with oversized genitals was published. The images contained gibberish text labels and sparked an outcry from the scientific community. The incident highlighted the need for better checks to prevent AI-generated fraud from infiltrating academic publications.

In another case of AI confusion, Air Canada’s customer service chatbot provided inaccurate refund policy information, leading to a legal dispute. A tribunal ruled the airline must honor the chatbot's false promises, setting a precedent for legal liability when companies deploy AI systems without warning of potential inaccuracies.

The blurring line between real and fake came into focus when Will Smith parodied a notoriously bad AI-generated video of himself eating spaghetti. The actor's deliberately exaggerated reenactment sparked discussion online about "deep doubt," as many viewers struggled to tell the genuine footage from the AI-generated version.

The year also saw advancements in military applications of AI when the US Marines began evaluating robotic "dogs" armed with AI-guided rifles. While the systems still require a human to authorize weapons discharge, the event sparked discussions about how AI's increasing integration into military robotics might change the nature of conflict.

Microsoft's unveiling of a controversial Windows 11 feature called "Recall" sparked privacy concerns. The feature continuously captured PC screenshots for later AI-powered search and retrieval. Although Microsoft stated that it stored encrypted snapshots locally and allowed users to exclude certain apps, the announcement still sparked public backlash.

Google's new AI Overview feature in search results faced criticism after providing false and dangerous information, including advising users that it was safe to eat rocks. These failures were attributed to the AI misinterpreting joke posts as factual sources, highlighting the challenge of ensuring AI models do not surface dangerous misinformation.

Image synthesis also had a setback when the release of Stable Diffusion 3 Medium drew criticism online for producing poor-quality AI-generated images of human anatomy. Users shared examples of distorted bodies and surreal errors, which they attributed to Stability AI's aggressive filtering of adult content from its training data.

OpenAI's ChatGPT Advanced Voice Mode unexpectedly imitated a user's voice during internal testing, highlighting the impressive capabilities of AI voice-synthesis technology. To mitigate risks of misuse, the company created an output classifier system to prevent unauthorized voice imitation.

Residents of San Francisco got a noisy taste of robotic confusion when Waymo self-driving cars began engaging in extended honking matches in a parking lot, creating a nightly disturbance for nearby residents. The episode illustrated the unintended emergent behavior of autonomous systems operating in aggregate.

Oracle co-founder Larry Ellison envisioned a future of ubiquitous AI surveillance to ensure lawful behavior, drawing parallels to existing systems in China. His vision highlighted fears that increased surveillance could come at the cost of freedom.

This writer also experienced an unusual AI creation, using a model to reproduce his late father’s handwriting. While intended as a tribute, many people found the AI project disturbing, showing the complex reactions people have toward this technology.

In conclusion, 2024 saw AI shift from a novelty into a concrete and sometimes chaotic presence in everyday life. The challenges the technology surfaced this year will likely shape our approach to artificial intelligence for years to come.