Bioshields, Organoids, Emotion-Driven Games: Insider Predictions for the Near Future
When asked to predict the next 12 months, our PhD founders are reluctant to make bold claims. It’s when you start discussing the next few years, the near future, that their radical vision emerges, describing worlds and possibilities you may not have read about anywhere else.
We caught up with some of our portfolio founders to learn more about what they’re watching closely. From breakthroughs in AI-driven diagnostics to the potential of organoid intelligence for robotics, the coming years will introduce a new era in human and machine collaboration.
“Neural networks won’t just analyse biological data — they’ll increasingly drive the design of diagnostic platforms”
Daniel Todd is the founder of Newcastle-based biotech startup InvenireX (Conception X Cohort 5). His team has developed smart DNA nanostructures that can be programmed to capture target nucleic acids on microfluidic chips, enabling early detection of viruses, pathogens, tumors and other diseases through advanced object-detection neural networks.
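To make the detection step concrete: the idea is to treat a chip readout as an image and flag localised bright regions as candidate capture events. The sketch below is purely illustrative — it is not InvenireX's actual pipeline, and it stands in a simple threshold-and-flood-fill routine where a trained object-detection network would be used in practice.

```python
import numpy as np

def detect_capture_events(frame, threshold=0.5, min_pixels=3):
    """Flag connected bright regions in a chip image as candidate capture events.
    A trained object-detection network would do this in a real system; a
    threshold plus flood fill stands in here for illustration."""
    mask = frame > threshold
    visited = np.zeros_like(mask, dtype=bool)
    events = []
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                # flood-fill the connected bright region
                stack, pixels = [(i, j)], []
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if len(pixels) >= min_pixels:  # ignore single-pixel noise
                    ys, xs = zip(*pixels)
                    events.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return events

# Simulated chip frame: dark background plus one bright capture spot.
frame = np.zeros((16, 16))
frame[5:8, 5:8] = 0.9
events = detect_capture_events(frame)
print(events)  # one event centred at (6.0, 6.0)
```

The centroid of each region is what a downstream classifier would consume; a neural detector replaces the hand-set threshold with learned sensitivity.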
Todd says the boundaries of biotechnology are expanding beyond recognition, driven by converging breakthroughs in molecular diagnostics, biosensing and integrated computational systems that are transforming healthcare, agriculture and environmental science.
One of his visions for the near future is a “bio-shield — a global system that continuously monitors human and animal populations for genetic and molecular signals of emerging pathogens”, powered by real-time molecular detection and AI-driven biosensing networks.
“Airports, hospitals and crowded spaces will feature nanofluidic biosensors embedded in air filters, capable of detecting single viral particles at attomolar concentrations,” he says.
“AI will process this data instantly, neutralising threats within 24 hours by triggering targeted vaccine production. No panic. No shutdowns.” It may sound like science fiction, but that is exactly what his startup is building.
Over the next few months and years, Todd predicts that PCR will be gradually replaced by cheaper, portable tools capable of rapidly detecting diseases at low concentrations.
“Enzyme-independent techniques, powered by cutting-edge nucleic acid chemistry, will bypass thermal cycling, addressing some of the key limitations of this technology,” Todd says.
“Imagine a clinician diagnosing a rare viral strain in minutes using a bedside device, or outbreaks intercepted with precision in the most remote villages. That’s where we’re headed.”
For Todd, 2025 is the year AI in diagnostics starts to shift from aid to architect.
“Embedded in devices, neural networks won’t just analyse biological data — they’ll increasingly drive the design of diagnostic platforms and optimise sensitivity in real time, adapting to threats like evolving pathogens,” he says.
“Artificial Intelligence will start dealing with problems software never could”
Asked to comment on what’s next for AI, Sophia Kalanovska and Charles Higgins of Tromero (Conception X Cohort 5) suggest that we’re headed for an era of disillusionment that will ultimately pave the way for more valuable applications.
“Last year, the world started playing around with generative AI, and it’s been integrated in enough everyday tools — from Apple Intelligence to Notion — that more people are starting to realise its current use cases aren’t massive,” Kalanovska says.
“We’re now at a point where the chatbots are getting good, but unfortunately there’s a limited amount of stuff you can do with a chatbot,” Higgins says.
“AI now needs to be integrated with the rest of the stack, and the first build is never going to be good — you’re going to have AIs embedded in something, and it’s not going to add much value for the first or second iteration. And that’s going to be where people start getting disillusioned before it starts getting better.”
The founders predict that 2025 will be the year people begin to recognise AI’s potential to solve significant problems rather than viewing it primarily as a tool for information retrieval or to improve user experience.
“Existing AI development tends to resemble more generic software development, with an AI component or two,” they say.
“The value we’re starting to see in AI is its ability to deal with varying scenarios and settings. Software pipelines with hints of AI will struggle to adapt to this, but autonomous AI systems will be able to cope, particularly if the pace of development in reasoning AI continues. This means that AI could start dealing with problems software never could, and will truly disrupt the way people and businesses currently work.”
As the hype around the first wave of generative AI products peaks, Kalanovska and Higgins predict a mass extinction of AI startups in the coming months, fuelled by the surge of investment into the sector in recent years and expectations of explosive growth that have yet to be realised.
Incumbents have caught on, and with AI now easy to integrate natively into most products, they expect it will become increasingly difficult for early-stage startups in the space to compete.
Over the next few months, here’s what the two founders are expecting to see:
- In content generation, a major animation studio will release a fully AI-generated movie by the end of 2025, and we will see a choose-your-own-adventure TV-show “in the style of Black Mirror”, with AI creating new storylines based on the viewer’s choices. “I imagine it won’t reach live-action level — think more like a Family Guy episode,” Higgins says.
- A sustained decline in the quality of literature, including academic papers — “GenAI generates things within a distribution, so anything that is surprising or unexpected is actually much harder to get out of a language model,” Kalanovska says — exacerbated by the fact that the ease of using these tools will likely continue to outweigh quality concerns in the near future.
- Like all the other big players in the space before them, OpenAI will announce the release of a hardware device this year, in the face of increasing competition.
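Kalanovska's point about distributions can be seen in a toy example. A language model samples its next token from a softmax over scores, so low-scoring "surprising" continuations are exponentially unlikely to appear. The vocabulary and scores below are invented for illustration:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A toy next-word distribution: common continuations dominate,
# and the unexpected word is almost never sampled.
scores = {"the": 5.0, "a": 4.0, "zeitgeist": 0.0}
probs = softmax(list(scores.values()))
for word, p in zip(scores, probs):
    print(f"{word}: {p:.3f}")
```

Here the rare word ends up with well under 1% probability mass, which is why model-generated text tends to regress toward the safe and expected.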
“Machines will adapt to players’ reactions and improve gaming experiences”
Yann Frachi, founder of game tech startup OVOMIND (Conception X Cohort 4), has more to add on content generation.
With Grand Theft Auto VI set to be released this autumn, he expects AI in gaming to reach a new level of maturity in 2025, off the back of recent remakes that have shown what can happen if previous generations of games are remastered with ultra-realistic AI graphics.
“It’s like you’re looking at a new video game,” Frachi says. “It’s a huge opportunity.”
He predicts that imminent developments will include the integration of natural language processing to allow for infinite natural dialogue between player and game in near-real-time, with the possibility to generate unscripted side quests. He also expects the first emotion-driven game to come out later this year, powered by the technology his team has developed.
“We signed with several studios around the world, and I think we will have some OVOMIND compatible games by the end of 2025,” he says.
The startup is developing a model that uses biosignals instead of prompts to dynamically adjust game structure based on players’ emotions and reactions — including how they interact with the controller.
“This way, we can create interactive experiences that feel more human-like,” the founder says.
“In life, when someone reacts to another person, they’re actually adapting to their emotions. If you see someone who’s sad, you notice it and adjust your dialogue while interacting with them. With OVOMIND, machines will do the same. AI will improve the gaming interaction by adapting to the player’s reactions.”
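The adaptation loop Frachi describes can be sketched in a few lines. This is a hypothetical toy, not OVOMIND's model: the signal names, thresholds, and the single game parameter are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Biosignals:
    heart_rate: float        # beats per minute
    skin_conductance: float  # microsiemens; rises with arousal
    grip_pressure: float     # 0..1, from an instrumented controller

def infer_emotion(sig):
    """Toy classifier: arousal from physiology, tension from grip."""
    arousal = (sig.heart_rate - 60) / 60 + sig.skin_conductance / 10
    if arousal > 1.0 and sig.grip_pressure > 0.7:
        return "stressed"
    if arousal > 0.6:
        return "excited"
    return "calm"

def adjust_game(emotion, state):
    """Nudge pacing in response to the inferred emotion."""
    if emotion == "stressed":
        state["enemy_spawn_rate"] *= 0.8   # ease off
    elif emotion == "calm":
        state["enemy_spawn_rate"] *= 1.2   # raise the stakes
    return state

state = {"enemy_spawn_rate": 1.0}
mood = infer_emotion(Biosignals(heart_rate=150, skin_conductance=8.0,
                                grip_pressure=0.9))
print(mood, adjust_game(mood, state))
```

A real system would replace the hand-set thresholds with a trained model and feed the adjustment back into the game engine every frame, but the shape of the loop — sense, infer, adapt — is the same.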
Don’t expect smooth real-time audiovisual generation this year, as it will require significant advances in computational power, but this is likely to become achievable within the next few years.
When that happens, a new kind of entertainment will emerge — one that blurs the boundaries of traditional gameplay, allowing you to talk to a game as if conversing with a real person, rather than typing or using a controller.
“If games become more psychological, relying on dialogue and emotions rather than gaming skills, the market will expand significantly, and this will shape the anatomy of games,” Frachi says. “We’re moving towards shorter, higher-quality experiences at lower prices.”
“Organoid intelligence will move out of simulation and be applied to simple robotic control tasks”
If you were at the Conception X Demo Day last year, you heard it there first — or you may have read about it in PreSeed Now: “AI is dead,” David Yacoub, founder of Omnibio (Cohort 7), proclaimed on stage. His startup is building the future of organoid intelligence (OI).
Yacoub predicts AI’s progress will slow in the near future, due to its inefficiency with data and reliance on finite datasets, including synthetic ones. Instead, he says, research labs across the world have been working on a new technology, originally developed to remove animal models in drug testing, that could also be applied to solve some of the biggest challenges facing AI.
“While it’s a difficult comparison, because one technology is a lot earlier than the other, OI is more data-efficient, energy-efficient and capable of continual learning,” he says.
“It’d be very difficult for AI to compete with OI — unless training times can be massively reduced. Right now, by the time a model’s trained, it’s already out of date. With OI, you could continually train it.”
This, Yacoub says, could allow OI to surpass AI in areas like autonomous robotic control.
Last year, biocomputing startup FinalSpark demonstrated a remotely connected brain organoid controlling the flight of a virtual butterfly in a simulated world.
“This shows we’re starting to move towards better simulations of controlling agents in virtual spaces,” Yacoub says. “It’s the first step towards simulating the flight path of a real object — and ultimately, towards robotic control using these systems.”
The founder predicts we’re still at least a couple of years away from OI being used in commercial applications — “it’s still a niche subject and most people haven’t heard of it yet,” he says.
There are still foundational research requirements slowing it down, including standardising organoid production and improving multi-electrode arrays, which are essential to stimulate and train organoids to perform increasingly complex tasks.
Yet, he says, in 2025 we will start seeing the proliferation of OI outside of academic circles, and there will be more opportunities for people to interact with the technology.
“I think that in the near future we will start to see it move out of simulation and be applied to very simple robotic control tasks.”
Omnibio’s ultimate goal is to leverage these cultures to develop autonomous robotic control systems capable of independent inference and functioning in dynamic environments, including warehouses.
Subscribe to the Conception X newsletter to follow their journey.