French firm FlexAI comes out of stealth with $30M to make AI compute more accessible.
Tesla, Intel, Nvidia, and Apple are included on CEO Brijesh Tripathi’s CV.
The Paris-based startup, FlexAI, has been operating in stealth since October 2023. On Wednesday it formally launches with €28.5 million ($30 million) in funding, and it has teased its first product: an on-demand cloud service for AI training.
This is a sizable amount of money for a seed round, which typically signals a strong founder pedigree, and this one has it. FlexAI co-founder and CEO Brijesh Tripathi was previously a senior design engineer at GPU giant Nvidia, and went on to hold senior engineering and architecture roles at Apple; Tesla (where he reported directly to Elon Musk); Zoox (before Amazon acquired the autonomous driving startup); and, most recently, Intel’s AI and supercompute platform offshoot, AXG.
AI compute, in this context, covers work such as applying machine learning models, processing data, and running algorithms.
Using any kind of infrastructure in AI is difficult and not for the faint of heart, Tripathi told Insightfullnk. “To use infrastructure, you have to know too much about how to build it.”
The public cloud ecosystem that has developed over the past two decades, by contrast, is a good illustration of how an industry has grown out of developers’ need to build applications quickly without worrying much about the back end.
“To write an application as a small developer, all you have to do is spin up an EC2 (Amazon Elastic Compute Cloud) instance; you don’t need to know where it’s being run or what the back end is.”
“You’re done,” Tripathi stated. “AI compute can’t do that right now.”
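To illustrate the kind of simplicity Tripathi is pointing at, here is a minimal sketch of that general-purpose cloud workflow using AWS’s boto3 SDK; the AMI ID and instance type are placeholders, and configured credentials and a default region are assumed.

```python
# Minimal sketch: launching a general-purpose cloud instance with boto3.
# The AMI ID and instance type are placeholders; AWS credentials are assumed
# to be configured in the environment.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}; the back end is the cloud provider's problem.")
```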
In AI, by contrast, developers have to work out how many GPUs (graphics processing units) they need and what kind of network to connect them over, using a software ecosystem they set up themselves. If a GPU, the network, or any other component fails, fixing it is also on the developer.
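For a sense of what that burden looks like in practice, below is a sketch of the kind of setup code developers typically write themselves today with PyTorch’s distributed tooling. It assumes a launcher such as torchrun has set the usual environment variables, and it illustrates the status quo rather than anything FlexAI-specific.

```python
# Sketch of the distributed boilerplate AI developers currently own themselves:
# picking a communication backend, binding each process to a GPU, and tearing
# everything down. Assumes a launcher (e.g. torchrun) sets LOCAL_RANK etc.
import os

import torch
import torch.distributed as dist

def setup_distributed() -> int:
    local_rank = int(os.environ["LOCAL_RANK"])  # set by the launcher
    torch.cuda.set_device(local_rank)           # pin this process to one GPU
    dist.init_process_group(backend="nccl")     # GPU-to-GPU networking layer
    return local_rank

if __name__ == "__main__":
    local_rank = setup_distributed()
    print(f"worker {dist.get_rank()} of {dist.get_world_size()} ready on GPU {local_rank}")
    # ...training loop goes here; if a GPU or the network fails, recovery is on the developer
    dist.destroy_process_group()
```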
“We want to bring AI compute infrastructure to the same level of simplicity that the general-purpose cloud has reached. After two decades, yes, but there’s no reason AI compute can’t see the same benefits,” Tripathi stated. “Our goal is to reach a point where running AI workloads doesn’t require you to become an expert in data centers.”
Later this year, FlexAI will introduce its first commercial product, after putting the current version through its paces with a small number of beta users. In essence, it’s a cloud service that connects developers to “virtual heterogeneous compute,” letting them run workloads and deploy AI models across multiple architectures and pay for what they consume rather than renting GPUs by the hour.
GPUs are essential components in AI development, used to train and run large language models (LLMs). Nvidia, one of the leading companies in the GPU market, has benefited enormously from the AI wave that OpenAI and ChatGPT sparked. In the twelve months since OpenAI released an API for ChatGPT in March 2023, letting developers build ChatGPT capabilities into their own apps, Nvidia’s market capitalization has surged from around $500 billion to more than $2 trillion.
The tech sector is now producing LLMs in abundance, and demand for GPUs is soaring with it. But GPUs are expensive to run, and renting them for ad hoc jobs or smaller use cases doesn’t always make sense; AWS, for instance, has been experimenting with time-limited rentals for smaller AI projects. Renting is still renting, though, which is why FlexAI wants to give customers on-demand access to AI compute.
Multicloud AI
The premise behind FlexAI is that most developers don’t much care whose GPUs or chips they use, whether from Nvidia, AMD, Intel, Graphcore, or Cerebras. Their main concern is developing AI and building applications within their budget.
This is where FlexAI’s concept of “universal AI compute” comes in: FlexAI takes the user’s requirements and allocates them to whichever architecture makes the most sense for that particular task, whether that’s Nvidia’s CUDA, AMD’s ROCm, or Intel’s Gaudi infrastructure, handling all the necessary conversions across the different platforms.
“This means that the developer is only focused on creating, honing, and using models,” Tripathi said. “We handle everything underneath. Failures, recovery, and reliability are all handled by us, and you pay only for what you consume.”
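As a purely conceptual illustration of that kind of routing, the sketch below maps a workload description to a target architecture. The names and thresholds are invented for this example and are not FlexAI’s actual interface.

```python
# Conceptual sketch only: WorkloadSpec, pick_backend, and the thresholds are
# invented for illustration and are not FlexAI's API.
from dataclasses import dataclass

@dataclass
class WorkloadSpec:
    deadline_hours: float  # how soon the job must finish
    budget_usd: float      # spending ceiling for the run

def pick_backend(spec: WorkloadSpec) -> str:
    """Map a workload description to a target architecture."""
    if spec.deadline_hours < 2:
        return "nvidia-cuda"   # fastest option when the deadline is tight
    if spec.budget_usd < 100:
        return "intel-gaudi"   # cheaper but slower
    return "amd-rocm"          # a middle ground in this toy example

print(pick_backend(WorkloadSpec(deadline_hours=12, budget_usd=50)))  # intel-gaudi
```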
More than simply replicating the pay-per-use model, FlexAI aims to accelerate for AI what has already happened in the cloud: the ability to go “multicloud,” drawing on the respective strengths of different GPU and chip infrastructures.
FlexAI allocates each customer’s workload according to their priorities. A business with a constrained budget can configure FlexAI to maximize the compute it gets for its money when training and tuning its AI models, which may mean using Intel for cheaper (but slower) compute; a developer with a small run that needs to finish as quickly as possible can route it to Nvidia instead.
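To make that trade-off concrete, here is a toy comparison of the kind a priority-aware scheduler might perform; the hourly rates and run times are made-up numbers, not real pricing.

```python
# Toy cost-versus-speed comparison; the prices and durations are invented.
options = {
    "cheaper-slower": {"usd_per_hour": 2.0, "hours": 30},  # e.g. budget accelerators
    "faster-pricier": {"usd_per_hour": 8.0, "hours": 10},  # e.g. top-end GPUs
}

def choose(priority: str) -> str:
    if priority == "cost":
        return min(options, key=lambda k: options[k]["usd_per_hour"] * options[k]["hours"])
    return min(options, key=lambda k: options[k]["hours"])  # otherwise optimize for speed

print(choose("cost"))   # cheaper-slower: $60 total versus $80
print(choose("speed"))  # faster-pricier: done in 10 hours versus 30
```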