3 AI predictions for 2024, from ai-PULSE
Artificial intelligence took major leaps forward at Station F on November 17, setting trends which are bound to leave their mark on 2024 and beyond. Let’s discover a few…
It’s been a rollercoaster ride for AI over the past two years, during which the market has skyrocketed. To give just one of many possible examples, it’s been estimated that generative AI alone could increase global GDP by 10% over the coming decade (source: JP Morgan).
It’s therefore little surprise that AI expertise has grown exponentially of late, especially in the US and Asia. However, as the first edition of ai-PULSE underlined last November, whilst Europe has historically been tech’s late starter, it mustn’t be ruled out of the AI race.
Paris in particular has a thriving AI startup ecosystem, led by stars such as Mistral and the H Company, and by pioneering lab Kyutai (itself launched at ai-PULSE 2023). And let’s not forget that without French talents such as Meta’s Thomas Scialom or Christian Keller, models like Llama would simply not exist.
So where does that leave us today? With an AI sector ripe for maturity, beyond the hype wave which we all know will subside in time. How will it mature? We’ve identified three key vectors as ai-PULSE 2024’s main themes.
The future of AI is increasingly shaped by the need for ever-larger amounts of data and computing power. Few companies have embodied this trend more than OpenAI, the dominant player in LLMs, thanks to its GPT models. But they’re not alone!
Recent experience has confirmed decisively that we need ever-more-powerful GPU clusters to handle increasingly complex models. So at ai-PULSE, we’ll be looking deeply into the need for powerful computing for AI tasks, and at efforts to make these advanced technologies easier to use and more cost-effective.
After all, many studies suggest that larger models and more data lead to better results, as scaling curves from the medical field illustrate (source).
The same principle applies across other types of generative models: the more data a model is trained on, the more realistic its images, the more precise its predictions, and so on.
Which of course raises the question: how far can this curve go? Indeed, as Exponential View’s Azeem Azhar points out in the same article, by 2027, training a single AI model could cost $100 billion, raising concerns about the economic viability of further scaling. Future AI development may face limits due to data scarcity, rising costs, and the need for innovations in synthetic data and efficiency improvements. So not only does AI compute need to get more and more powerful; it also needs to get more accessible.
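To give a rough sense of that curve (our illustration, not from the article Azhar cites), here is a small sketch based on the parametric scaling law fitted in DeepMind’s Chinchilla paper (Hoffmann et al., 2022). The constants are that paper’s approximate fitted values, and the loss figures are indicative only:

```python
# Illustrative only: Chinchilla-style parametric scaling law (Hoffmann et al., 2022).
# Predicted loss falls as a power law in parameters (N) and training tokens (D),
# so each extra order of magnitude of compute buys a smaller improvement.

E, A, B = 1.69, 406.4, 410.7      # approximate fitted constants from the paper
alpha, beta = 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pre-training loss for a model with n_params parameters
    trained on n_tokens tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Model sizes roughly following the ~20 tokens-per-parameter rule of thumb
for n, d in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12), (1e12, 2e13)]:
    print(f"{n:>8.0e} params, {d:>8.0e} tokens -> predicted loss ≈ {predicted_loss(n, d):.3f}")
```

Each tenfold increase in parameters and data still lowers the predicted loss, but by a smaller margin each time, which is exactly why the cost projections above become so hard to sustain.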
ai-PULSE speaker Robert Marino, CEO of Qubit Pharmaceuticals, is well placed to dive into this topic. His startup uses Scaleway’s GPU power to accelerate the discovery of new medicines, using a combination of high-performance computing (HPC), quantum computing, and AI. This combination allows research teams to obtain the same test results with 3-5 times fewer staff and 20 times fewer tests than traditional methods, demonstrating how compute power can turbo-boost healthcare when applied correctly. More info on Qubit’s fascinating work here.
Other ai-PULSE speakers who can testify to the importance of large models and major clusters include Florian Douetteau, CEO of Dataiku. This French unicorn owes its explosive growth to its expert use of AI to deliver shopping recommendations and business performance forecasts to clients as large as General Electric, Levi’s and Mercedes-Benz.
Charles Kantor of fellow French AI unicorn H will also take to the ai-PULSE stage. Kantor’s company, which also relies on Scaleway’s large GPU clusters, made headlines earlier this year when it launched with $220 million in seed funding; a sum most other startups could only dream of.
While large models continue to dominate benchmarks, “smaller” models - either fine-tuned or organized as agents - are proving today that size isn’t everything. They can deliver high performance whilst requiring less powerful hardware, leading to significant cost savings. Compact models also have lower energy consumption, making AI more sustainable.
First and foremost, it’s increasingly clear that not everyone needs a model trained on the entire internet. Some users, such as a given country’s legal profession, only require models trained on their own specific data subset; in this case, that country’s court rulings. This naturally leads to smaller, more specialized models.
As Sasha Luccioni et al. indicate in their latest white paper, in many applications, utility does not require scale.
In France in particular, the term “frugal AI” is catching on, notably because the country is a green IT pioneer, but also because the government has affirmed that lighter AI models will stand a better chance of receiving financial support and state contracts. Why? Largely because the environmental impact of predominant models like OpenAI’s GPT series is increasingly large… and increasingly hard to measure.
Which is precisely why ai-PULSE speaker Samuel Rincé, Lead Engineer at Algyne, set about creating an impact measurement tool, with the support of the NGO Data for Good. The result, Ecologits.ai, is an open source Python library that anyone can use to estimate the inference impact of many major models, demonstrating for example that using GPT-4o generates 7-25 times more emissions than its previous version. How? As OpenAI doesn’t provide that data, Ecologits takes the closest open source model to GPT-4o, and works out its estimate from there.
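To show how such a measurement is made in practice, here is a minimal sketch based on EcoLogits’ documented usage; the model name and prompt are illustrative, and attribute names should be checked against the current EcoLogits documentation:

```python
# Minimal sketch (assumptions flagged in comments): EcoLogits instruments the
# provider client and attaches estimated environmental impacts to each response.
from ecologits import EcoLogits
from openai import OpenAI

EcoLogits.init()  # instrument supported providers (here, OpenAI)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice; any supported model works
    messages=[{"role": "user", "content": "Summarize the ai-PULSE 2024 themes."}],
)

# Estimated impacts of this single inference. For closed models the values are
# estimates derived from the closest open source proxy model.
print(f"Energy consumption: {response.impacts.energy.value} kWh")
print(f"GHG emissions: {response.impacts.gwp.value} kgCO2eq")
```

Because providers like OpenAI don’t publish hardware or energy data, the figures returned for closed models are estimates built from the closest open source proxy, exactly as described above.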
Indeed, it’s often said that you can’t improve what you can’t measure, and this is a clear advantage of open source models; not just in terms of measuring their impact, but also in measuring their compliance with key ethical standards, e.g. bias based on attributes like age, gender and ethnicity. This is precisely the role of Giskard.ai, whose CEO Alex Combessie will also speak at ai-PULSE 2024.
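To make that role more concrete, here is a hypothetical, self-contained sketch of a bias-oriented scan with Giskard’s open source Python library; the toy data, stub model and argument names are our illustrative assumptions, to be checked against Giskard’s documentation:

```python
# Hypothetical sketch: scanning a toy "hiring" classifier for issues such as
# performance disparities across subgroups with Giskard's open source scan().
import giskard
import numpy as np
import pandas as pd

# Purely synthetic dataset including a sensitive attribute ("gender").
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "experience_years": rng.integers(0, 20, size=200),
    "gender": rng.choice(["F", "M"], size=200),
    "hired": rng.choice(["yes", "no"], size=200),
})

def predict_proba(data: pd.DataFrame) -> np.ndarray:
    # Stub standing in for a real classifier: probability of "yes" grows with
    # experience (and deliberately ignores gender here).
    p_yes = np.clip(data["experience_years"] / 20, 0.05, 0.95).to_numpy()
    return np.column_stack([1 - p_yes, p_yes])  # columns follow classification_labels

wrapped_model = giskard.Model(
    model=predict_proba,
    model_type="classification",
    classification_labels=["no", "yes"],
    feature_names=["experience_years", "gender"],
)
wrapped_dataset = giskard.Dataset(df=df, target="hired", cat_columns=["gender"])

# Runs Giskard's detectors (subgroup performance gaps, robustness, spurious
# correlations...) and reports the issues it finds.
report = giskard.scan(wrapped_model, wrapped_dataset)
print(report)
```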
Be sure to join us on November 7 to discover more examples of how the future of AI will need models that are compact, specialized, sustainable and ethical.
AI sovereignty is proving to be just as rich a trend as sustainability, if not more so. The rapid dominance of OpenAI and other US providers is now leading to an essential, recurring question: what happens to my data, and that of my users, if I rely on a foreign provider?
Given the increasing opacity of such providers, the need for sovereign, open source AI solutions is becoming ever more pressing. Why, indeed, should a national government want to use AI tools that cannot guarantee the safeguarding of its citizens’ data?
This is why, in an increasingly fragmented world, digital sovereignty and open source are more important today than ever; and why this third ai-PULSE 2024 theme promises to be one of its most fascinating.
As iliad Group Founder Xavier Niel put it at ai-PULSE last year, “Do we want our children using solutions that aren’t created in Europe? No. So how can we have products that fit our needs better?”
Building AI capacity within Europe ensures full control over data. Relying on open source technologies brings greater freedom and flexibility, whilst reducing dependence on foreign providers and enhancing resilience in case of global disruptions. So this is precisely what our speakers will dive into on November 7.
Open source AI is not just better for transparency and independence. Of course, it ensures that enterprises maintain control over their AI infrastructure, without being vulnerable to sudden licensing changes or vendor lock-in.
But open source is also arguably one of the key factors behind AI’s recent explosive growth. Open source projects like TensorFlow, PyTorch, and Hugging Face have provided powerful, accessible tools that have allowed researchers and developers to rapidly prototype, experiment, and iterate. This fast learning curve has been further accelerated by the open source ecosystem’s global community, whose collaborative DNA has enabled many of the shared breakthroughs behind today’s most advanced AI models.
“Open weight” models, such as Meta’s Llama or Mistral 7B, have notably been at the forefront of AI innovation precisely because they are built to allow anyone to scrutinize them, and to let all users fine-tune them to meet specific needs.
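To illustrate what that fine-tuning freedom looks like in practice, here is a minimal sketch (our illustration, not a Meta or Mistral recipe) using the Hugging Face transformers and peft libraries to attach LoRA adapters to an open-weight checkpoint; the model name and hyperparameters are illustrative choices:

```python
# Minimal sketch: adapting an open-weight model to a specific domain with LoRA
# adapters, so only a small fraction of the weights needs to be trained.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # illustrative open-weight checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model

# From here, train on the domain-specific corpus (e.g. a country's court rulings)
# with the usual transformers Trainer, then publish or deploy the small adapter.
```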
And if the performance of some ‘pure’ open source models, such as Falcon 180B, is catching up with that of ‘closed’ ones in terms of speed and accuracy, it’s also largely thanks to those models’ openness, which allows for more collaborative innovation.
These are just some of the reasons we can’t wait to hear speakers like Laurent Mazaré, CTO of French AI innovation lab Kyutai, at ai-PULSE 2024. Built on the principles of “open science”, Kyutai’s work is 100% transparent, and as such usable by anyone (including OpenAI!). Co-founded in part by GAFAM alumni, Kyutai is a great example of transatlantic tech innovation, as demonstrated by the recently released Moshi, the first ever AI chatbot capable of displaying emotion. Which was, of course, trained on Scaleway’s GPU cluster!
Other speakers diving into the importance of building autonomous European AI solutions will be renowned investor Gabriel de Vinzelles of Frst; Eliot Andres, CTO of French AI imagery unicorn PhotoRoom; and Laurent Daudet, co-CEO of LightOn.
We can’t wait to see you there!
Article by Frédéric Bardolle, Lead Product Manager, AI & Constance Morales, Product Marketing Manager, AI - Scaleway