
This week in AI: Mistral and the EU's fight for AI dominance


Keeping up with an industry as fast-paced as artificial intelligence is a tall order. So, until an AI can do it for you, here's a helpful roundup of recent stories in the world of machine learning, along with notable research and experiments we didn't cover on their own.

This week, Google flooded the channels with announcements around Gemini, its new flagship multimodal AI model. It turns out the model isn't as impressive as the company initially made it seem, or rather, the "lite" version (Gemini Pro) that Google released this week isn't. (It doesn't help that Google faked a product demo.) We'll reserve judgment on Gemini Ultra, the full version of the model, until it begins making its way into various Google apps and services early next year.

But enough about chatbots. The bigger news, I'd say, is a funding round that just squeezed into the workweek: Mistral AI raising €450 million (~$484 million) at a $2 billion valuation.

We've covered Mistral before. In September, the company, co-founded by Google DeepMind and Meta alumni, released its first model, Mistral 7B, which it claimed at the time outperformed other models of its size. Prior to Friday's fundraise, Mistral had already closed one of Europe's largest seed rounds to date, and it hasn't even launched a product yet.

Now, my colleague Dominic has rightly pointed out that Paris-based Mistral's fortunes are a red flag for many concerned about inclusivity. The startup's co-founders are all white and male, and academically fit the homogenous, privileged profile of many of those on the much-criticized lists of AI changemakers.

At the same time, investors seem to see Mistral, as well as its sometime rival, Germany's Aleph Alpha, as Europe's chance to plant its flag in the very fertile (for now) generative AI ground.

So far, the largest and best-funded generative AI ventures have been stateside. OpenAI. Anthropic. Inflection AI. Cohere. The list goes on.

Mistral's good fortune is in many ways a microcosm of the battle for AI dominance. The European Union (EU) is keen to avoid being left behind in yet another technological leap, while at the same time imposing regulations to guide the technology's development. As German Vice Chancellor and Economy Minister Robert Habeck was recently quoted as saying: "The thought of having our own sovereignty in the AI sector is extremely important. [But] if Europe has the best regulation but no European companies, we haven't won much."

The entrepreneurship-versus-regulation divide came into sharp relief this week as EU lawmakers worked to reach an agreement on policies to limit the risk of AI systems. Lobbyists, led by Mistral, have pushed in recent months for a blanket regulatory carve-out for generative AI models. But EU lawmakers have resisted such an exemption, for now.

All that being said, a lot is riding on Mistral and its European competitors. Industry watchers, and lawmakers stateside, will no doubt be closely watching the impact on investment once EU policymakers impose new restrictions on AI. Could Mistral one day grow to challenge OpenAI under the regulations as they stand? Or will the regulations have a chilling effect? It's too early to tell, but we can't wait to see for ourselves.

Here are some other notable AI stories from the past few days:

  • A new AI alliance: Meta, on an open source tear, wants to spread its influence in the ongoing battle for AI mindshare. The social network announced that it's partnering with IBM to launch the AI Alliance, an industry body to support "open innovation" and "open science" in AI, though ulterior motives abound.
  • OpenAI turns to India: Ivan and Jagmeet report that OpenAI is working with former Twitter India head Rishi Jaitly as a senior advisor to facilitate conversations with the government on AI policy. OpenAI is also looking to build a local team in India, with Jaitly helping the AI startup navigate the Indian policy and regulatory landscape.
  • Google launches AI note-taking: Google's AI-powered note-taking app NotebookLM, announced earlier this year, is now available to US users 18 and older. To mark the launch, the experimental app gained integration with Gemini Pro, Google's new large language model, which Google says will "help understand and make sense of documents."
  • OpenAI under regulatory scrutiny: The cozy relationship between OpenAI and Microsoft, a major backer and partner, is now at the center of a new inquiry launched by the Competition and Markets Authority in the UK into whether the two companies are effectively in a "relevant merger situation" following recent drama. The FTC is also reportedly looking into Microsoft's investments in OpenAI, in what appears to be a coordinated effort.
  • Asking AI nicely: How do you reduce biases in an AI model when they're baked in from biases in its training data? Anthropic suggests asking the model nicely to please, please not discriminate, or someone will sue us. Yes, really. Devin has the full story.
  • Meta rolls out AI features: Along with other AI-related updates this week, Meta AI, Meta's generative AI experience, gained new capabilities, including the ability to create images on demand as well as support for Instagram Reels. The first feature, called "reimagine," allows users in group chats to recreate AI images with prompts, while the second can turn to Reels as a resource as needed.
  • Respeecher gets cash: Ukrainian synthetic voice startup Respeecher, which is perhaps best known for being chosen to replicate James Earl Jones' iconic Darth Vader voice for a Star Wars animated series and later a younger Luke Skywalker for The Mandalorian, is finding success despite not only bombs raining down on its city but also a wave of hype that has propped up sometimes controversial competitors, writes Devin.
  • Liquid neural networks: An MIT spinoff co-founded by robotics luminary Daniela Rus aims to build general-purpose AI systems powered by a relatively new type of AI model called a liquid neural network. Called Liquid AI, the company raised $37.5 million this week in a seed round from backers including WordPress parent company Automattic.

More machine learning

Projected floating plastic sites off the coast of South Africa. Image Credits: EPFL

Orbital imagery is a great playground for machine learning models, since satellites these days produce more data than experts can keep up with. EPFL researchers are looking into better identifying plastic floating in the ocean, a huge problem but one that's very difficult to detect systematically. Their approach isn't groundbreaking (train a model on labeled orbital imagery), but they've refined the technique so that their system is considerably more accurate, even when there's cloud cover.
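To make that recipe concrete, here's a minimal sketch of the general approach; it is not EPFL's actual code, and the band count, tile size, and architecture are all assumptions. A small convolutional classifier is trained on labeled satellite tiles, with random tensors standing in for real multispectral imagery:

```python
# Minimal sketch: train a small CNN on labeled satellite tiles, where each
# tile is marked as containing floating plastic or not. Real inputs would be
# multispectral tiles (e.g. Sentinel-2); random tensors stand in for them.
import torch
import torch.nn as nn

class TileClassifier(nn.Module):
    def __init__(self, in_channels: int = 12):  # assumed: 12 spectral bands
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),  # single logit: plastic present / absent
        )

    def forward(self, x):
        return self.net(x)

model = TileClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in batch: 8 tiles of 64x64 pixels with 12 spectral bands each.
tiles = torch.randn(8, 12, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(tiles), labels)
    loss.backward()
    optimizer.step()
```

The refinements the researchers describe (robustness to cloud cover, for instance) would live in the data pipeline and labeling rather than in a loop this simple.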

Spotting the plastic is only part of the challenge, of course, and removing it is another. But the better the intelligence that people and organizations have when they do the actual work, the more effective they'll be.

Not every field has this much visibility, however. Biologists in particular face a challenge in studying animals that are poorly documented. They may want to track the movements of a certain rare type of insect, for example, but due to a lack of imagery of that insect, automating the process is difficult. A team at Imperial College London is putting machine learning to work on the problem with the help of the game development platform Unreal.

Image Credits: Imperial College London

By creating photorealistic scenes in Unreal and populating them with 3D models of the creature in question, be it an ant, a stick insect, or something larger, the researchers can generate arbitrary amounts of training data for machine learning models. Although the computer vision system is trained on synthetic data, it can still be very effective on real footage, as the video shows.
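To give a flavor of the data-generation loop, here's a toy stand-in for the Unreal pipeline. Instead of photorealistic 3D rendering, it composites a procedurally drawn sprite onto random backgrounds, but the payoff is the same: because the code places the object, every image comes with a free, perfectly accurate label. All names and parameters here are illustrative.

```python
# Toy synthetic-data generator: paste a "creature" sprite onto varied
# backgrounds at random positions, scales, and rotations, emitting an image
# plus a bounding-box label for each sample.
import json
import random
from PIL import Image, ImageDraw

def make_sprite(size: int = 48) -> Image.Image:
    """Procedural placeholder sprite (stands in for a rendered 3D model)."""
    sprite = Image.new("RGBA", (size, size), (0, 0, 0, 0))
    draw = ImageDraw.Draw(sprite)
    draw.ellipse([8, 16, 40, 32], fill=(40, 30, 20, 255))  # insect-ish body
    return sprite

def generate_sample(idx: int, canvas: int = 256) -> dict:
    background = Image.new("RGB", (canvas, canvas),
                           tuple(random.randint(40, 200) for _ in range(3)))
    scale = random.uniform(0.5, 2.0)
    sprite = make_sprite().resize((int(48 * scale),) * 2)
    sprite = sprite.rotate(random.uniform(0, 360), expand=True)
    x = random.randint(0, canvas - sprite.width)
    y = random.randint(0, canvas - sprite.height)
    background.paste(sprite, (x, y), sprite)  # alpha-composite the sprite
    background.save(f"sample_{idx:05d}.png")
    # The label comes for free: we placed the object, so we know its box.
    return {"image": f"sample_{idx:05d}.png",
            "bbox": [x, y, x + sprite.width, y + sprite.height]}

labels = [generate_sample(i) for i in range(100)]
with open("labels.json", "w") as f:
    json.dump(labels, f)
```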

You can read their paper in Nature Communications.

Not all generated imagery is so reliable, though, as University of Washington researchers found. They systematically prompted the open-source image generator Stable Diffusion 2.1 to produce images of a "person" with various constraints or locations, and showed that the term "person" is disproportionately associated with light-skinned, Western men.

Not only that, but certain locations and nationalities produced troubling patterns, such as sexualized imagery of women from Latin American countries and "an almost complete erasure of non-binary and indigenous identities." For instance, asking for pictures of "a person from Oceania" produced white men rather than indigenous people, despite the latter being numerous in the region (not to mention all the other non-white people there). It's all a work in progress, and being aware of the biases inherent in the data is important.
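The auditing harness itself is simple to reproduce in spirit. Here's a rough sketch using the Hugging Face diffusers library; the prompt wording, sample counts, and annotation step are assumptions on my part, not the paper's actual protocol.

```python
# Sketch of a systematic prompting audit: render the same "person" prompt
# across locations and inspect the outputs for demographic skew.
# Assumes a CUDA GPU and the diffusers + torch packages.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

locations = ["Oceania", "Latin America", "Europe", "North America"]
samples_per_prompt = 4  # the study would generate many more per prompt

for location in locations:
    prompt = f"a front-facing photo of a person from {location}"
    for i in range(samples_per_prompt):
        image = pipe(prompt).images[0]
        image.save(f"person_{location.replace(' ', '_')}_{i}.png")

# The saved grids can then be annotated (by human raters or a classifier)
# for perceived gender and skin tone to quantify any skew.
```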

Learning how to navigate models that are biased and of questionable utility is on the minds of many academics, and of their students too. This interesting conversation with Yale English professor Ben Glaser is a refreshingly optimistic take on how things like ChatGPT can be used constructively:

When you talk to a chatbot, you get this blurry, weird picture of culture back. You might get counterpoints to your ideas, and then you need to evaluate whether those counterpoints, or the supporting evidence for your ideas, are actually good ones. And there's a kind of literacy to reading those outputs. Students in this class are gaining some of that literacy.

If everything's cited, and you develop a creative work through some elaborate back-and-forth or programming effort that includes these tools, you're just doing something wild and interesting.

And when should such models be trusted in, say, a hospital? Radiology is one area where AI is frequently applied to help quickly identify problems in scans of the body, but it's far from infallible. So how should doctors know when to trust the model and when not to? MIT seems to think it can automate that part as well, but don't worry, it's not another AI. Instead, it's a standardized, automated onboarding process that helps determine when a given doctor or task will find an AI tool useful, and when it will get in the way.
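MIT's actual procedure isn't spelled out here, but the underlying idea can be shown in toy form: during onboarding, record outcomes for each task type with and without AI assistance, then recommend the tool only where it measurably helped. Everything below (the task names, records, and decision rule) is illustrative, not MIT's method.

```python
# Toy "onboarding" analysis: per task type, compare a clinician's accuracy
# with and without the AI tool, and gate the recommendation accordingly.
from collections import defaultdict

# Hypothetical onboarding records: (task_type, used_ai, was_correct).
records = [
    ("chest_xray", True, True), ("chest_xray", False, False),
    ("chest_xray", True, True), ("chest_xray", False, True),
    ("bone_scan", True, False), ("bone_scan", False, True),
    ("bone_scan", True, False), ("bone_scan", False, True),
]

# stats[task][used_ai] = [number correct, total attempts]
stats = defaultdict(lambda: {True: [0, 0], False: [0, 0]})
for task, used_ai, correct in records:
    stats[task][used_ai][0] += int(correct)
    stats[task][used_ai][1] += 1

for task, by_mode in stats.items():
    acc_with = by_mode[True][0] / by_mode[True][1]
    acc_without = by_mode[False][0] / by_mode[False][1]
    verdict = "use the AI tool" if acc_with > acc_without else "skip the AI tool"
    print(f"{task}: with={acc_with:.0%} without={acc_without:.0%} -> {verdict}")
```

A real system would need far more data per task and a statistical test rather than a raw comparison, but the gating logic is the point.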

Increasingly, AI models are being asked to produce more than just text and images. Materials science is one place where we've seen a lot of movement; models are great at coming up with promising candidates for better catalysts, polymer chains, and so on. Startups are getting in on it, and Microsoft also just released a model called MatterGen that is "specifically designed to create new, stable materials."

Image Credits: Microsoft

As you can see in the image above, you can target many different qualities, from magnetism to reactivity to size. No need for a Flubber-like accident or thousands of lab runs—this model could help you find a suitable material for an experiment or product in hours, not months.
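MatterGen's real interface isn't documented in this piece, so the following is a purely hypothetical sketch of what property-targeted generation amounts to in practice: you specify target property values and get back candidate materials ranked by how closely they match. A random pool plus ranking stands in for the conditional generative model here; every name and number is made up.

```python
# Hypothetical sketch of property-conditioned materials search: request
# target property values, receive the closest-matching candidates.
from dataclasses import dataclass
import random

@dataclass
class Candidate:
    formula: str
    magnetic_density: float
    band_gap: float

def sample_candidates(targets: dict, n: int = 5) -> list[Candidate]:
    """Stand-in for a conditional generator: random pool + ranking."""
    pool = [Candidate(f"X{i}Y{random.randint(1, 3)}",
                      random.uniform(0.0, 0.3),
                      random.uniform(0.0, 3.0))
            for i in range(1000)]
    # Rank by squared distance to the requested property values.
    def dist(c: Candidate) -> float:
        return sum((getattr(c, k) - v) ** 2 for k, v in targets.items())
    return sorted(pool, key=dist)[:n]

print(sample_candidates({"magnetic_density": 0.2, "band_gap": 1.1}))
```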

Google DeepMind and Berkeley Lab are also working in this area. This kind of model is quickly becoming standard practice in the materials industry.



VIA: techcrunch.com
