
NASA’s Curiosity rover accidentally uncovered pure sulfur crystals on Mars


Cheyenne MacDonald

NASA scientists say pure sulfur has been found on Mars for the first time after the Curiosity rover inadvertently uncovered a cluster of yellow crystals when it drove over a rock. And it looks like the area is filled with it. It’s an unexpected discovery — while minerals containing sulfur have been observed on the Red Planet, elemental sulfur on its own has never been seen there before. “It forms in only a narrow range of conditions that scientists haven’t associated with the history of this location,” according to NASA.

Curiosity cracked open the rock on May 30 while driving in a region known as the Gediz Vallis channel, where similar rocks were seen all around. The channel is thought to have been carved by water and debris flows long ago. “Finding a field of stones made of pure sulfur is like finding an oasis in the desert,” said Ashwin Vasavada, Curiosity’s project scientist. “It shouldn’t be there, so now we have to explain it. Discovering strange and unexpected things is what makes planetary exploration so exciting.”

A rock run over and cracked by the Curiosity rover revealing yellow sulfur crystals


After spotting the yellow crystals, the team later used a camera on Curiosity’s robotic arm to take a closer look. The rover then took a sample from a different rock nearby, as the pieces of the rock it had smashed were too brittle for drilling. Curiosity is equipped with instruments that allow it to analyze the composition of rocks and soil, and NASA says its Alpha Particle X-Ray Spectrometer (APXS) confirmed it had found elemental sulfur.


Outage caused by CrowdStrike’s disastrous update affected 8.5 million devices


Cheyenne MacDonald

The global outage caused by a faulty update from cybersecurity provider CrowdStrike on Friday affected some 8.5 million Windows devices, Microsoft said in a blog post. The update triggered a blue screen of death, bringing systems used by hospitals, airlines, banks and other major services temporarily to a standstill. Only machines running Windows were affected.

While the issue was mostly resolved by Friday afternoon, Microsoft and CrowdStrike are still dealing with the fallout. In the blog post on Saturday, Microsoft’s VP of Enterprise and OS Security, David Weston, wrote that the company is working with CrowdStrike to “develop a scalable solution that will help Microsoft’s Azure infrastructure accelerate a fix for CrowdStrike’s faulty update.” Microsoft has also called in help from Amazon Web Services (AWS) and Google Cloud Platform (GCP).

CrowdStrike said in its own blog post on Saturday that the update — a sensor configuration update — “was designed to target newly observed, malicious named pipes being used by common C2 frameworks in cyberattacks.” Unfortunately, for Windows devices running version 7.11 and above of CrowdStrike’s Falcon sensor, it instead “triggered a logic error that resulted in an operating system crash.” The total number of devices affected worked out to be “less than one percent of all Windows machines,” according to Weston.


.45 PARABELLUM BLOODHOUND is a cyberpunk RPG by the developer of VA-11 HALL-A


Mariella Moon

Sukeban Games is working on what it describes as a “cyberpunk active time action” game with a battle system that’s similar to Parasite Eve. In a blog post, Chris of Sukeban has officially announced .45 PARABELLUM BLOODHOUND, featuring a mercenary named Reila Mikazuchi as its protagonist. Enemies can attack you from anywhere while you’re exploring environments in the game, and you’ll have to dodge and wait for an action bar to fill before you can launch a counterattack. The action bar fills at a speed based on your character and weapon stats, so the stronger you get, the faster you can fight back.

While the announcement doesn’t have an in-depth explanation of the game’s plot, Chris describes its story as follows: “You play as Reila Mikazuchi; a washed out mercenary whose glory days are long gone. In a last attempt at grabbing life by the horns she decides to go back to the life, only to realize the real enemy isn’t in front of her gun.”

The indie developer is planning to make seven chapters for the game, and five are already done and playable. It has yet to announce a release date, though, so as not to repeat its “N1RV ANN-A situation.” Sukeban is the developer behind the cyberpunk bartending “booze-em-up” game VA-11 HALL-A, which is set in a post-dystopian world with a corporate-controlled society.

VA-11 HALL-A was wildly successful for an indie title, and Sukeban announced a sequel called N1RV ANN-A: Cyberpunk Bartender Action back in 2018 after it hit 200,000 copies sold. The developer hasn’t released N1RV ANN-A yet despite announcing a 2020 launch date, and it doesn’t look like we’re seeing it anytime soon. Chris said .45 PARABELLUM BLOODHOUND is “significantly ahead in development” and that the developer is dedicating its “full attention to it for the foreseeable future.”

Sukeban has also released the first trailer for .45 PARABELLUM BLOODHOUND, and you can watch it below.


The workers at Bethesda Game Studios have fully unionized


Mariella Moon

The workers at Bethesda Game Studios have joined the Communications Workers of America (CWA), and they say they’re the first Microsoft video game studio to form a wall-to-wall union. A total of 241 workers have either signed an authorization card or have indicated that they wanted to join a union through an online portal. The “wall-to-wall” nature of their organization means the CWA will be representing workers across job descriptions and divisions — and not just one type — including artists, engineers, programmers and designers. Bethesda is the developer behind Starfield and the Elder Scrolls and Fallout games.

Microsoft has already recognized the union, so workers at the studio’s Maryland office have officially joined CWA Local 2108, while those in its Texas office have become members of CWA Local 6215. “We are so excited to announce our union at Bethesda Game Studio and join the movement sweeping across the video game industry,” Mandi Parker, Bethesda Senior System Designer, said. “It is clear that every worker can benefit from bringing democracy into the workplace and securing a protected voice on the job. We’re thrilled to get down to brass tacks and win a fair contract, proving that our unity is a source of real power to positively shape our working conditions, our lives, and the company as a whole.”

Bethesda’s workers join the growing number of unionized personnel in video games. In January 2023, quality assurance workers at ZeniMax Studios, the parent company of Bethesda, banded together to form what was then the largest union in the industry with 300 members. It lost the distinction as the largest union in video games when 600 quality assurance workers at Activision, which is also owned by Microsoft, joined the CWA this year.


Rivian opens its first Charging Outpost, a crunchy not-gas station near Yosemite


Cheyenne MacDonald

Rivian just opened its first EV charging rest stop 24 miles outside of Yosemite National Park, complete with bathrooms, a lounge with a small library, a water refill station, free coffee and (not free) “make your own” trail mix. Only Rivian owners will be able to make use of the five DC fast chargers at the Rivian Yosemite Charging Outpost, but the other amenities are open to anyone.

The Charging Outpost is located in Groveland, California, near the park’s west entrance, and takes the place of an abandoned gas station. The shop area will be open from 7AM to 7PM, while the bathrooms and chargers will be available 24/7. It’s the first time Rivian has ventured into this kind of infrastructure, building on its growing network of regular charging sites — several of which are situated near Yosemite. The EV maker has 58 Waypoint charging sites, which support any electric vehicle that uses the standard J1772 plug, around the Yosemite Valley, and a Rivian-only Adventure charging site near the park’s east entrance.

Rivian says it has plans for more Charging Outposts “around national parks and other high-traffic areas across the country.” The first such building was designed with the intention of keeping waste to a minimum, and its retaining wall was made using materials from the old parking lot and sidewalk. It’s fitted with solar panels and has a passive cooling design that’s meant to reduce the need for AC or heating.

Beyond Charging Outposts, Rivian plans to eventually have over 3,500 of its Adventure Network DC fast chargers available in 600 sites across the US and Canada, on top of roughly 10,000 Level 2 chargers that will be open to the public.


How the Mercedes F1 IT team helped Lewis Hamilton win the British Grand Prix


When Lewis Hamilton crossed the finish line for his emotional win in the 2024 British Grand Prix at Silverstone, there would have been a good few fist pumps from the IT team at his Mercedes-AMG Petronas Formula One team.

While fans focus on the race engineers that helped get the Mercedes car back to the top of the podium after two difficult seasons, the technology team under IT director Michael Taylor certainly played its part too.

“Technology has always played a significant role in Formula One. If you think even back to the 1950s, tech was still there – the stopwatch was the key metric, and that still exists today. Ultimately, the measure of our performance is still based on the stopwatch and that never lies,” he says.

Taylor has been in F1 since 2002 when he joined the Renault team as a network support analyst. He’s been running the IT team at Mercedes since 2020.

“I’m not sure when I entered the sport 22 years ago that I ever envisaged being the number one IT person for the top team in the sport and the top team that had dominated for so long. I don’t think it was ever on the roadmap,” he says.

“My role these days is very much less technical and is predominantly people focused. It’s a fascinating challenge. That’s the thing that keeps me coming back day after day – no two days are ever the same; this morning, I was attending [F1] regulatory meetings.”

Taylor leads a 75-strong team based out of the Mercedes F1 factory in Brackley, a few miles from the Silverstone track – a “relatively lean” group, he says, considering the depth and breadth of the technology it covers.

A typical race week for IT

The British Grand Prix was the last part of a “triple header” – three races on consecutive weekends, which placed a heavy workload on the whole team. For those in IT, it was a particularly taxing time, as head of IT operations and service management Steve Riley explains to Computer Weekly during a behind-the-scenes visit to the Brackley factory.

“A typical race week starts either on the Sunday before the race or the Monday of race week. We have two IT racks that travel around the world. They’ll either go in trucks or in planes with the rest of the kit that travels with the team. Now we’re in a triple-header and the trucks have travelled from one race on to another and on to [Silverstone]. So, we didn’t get the race trucks until Tuesday, and they were built overnight to be ready for Wednesday,” he says.

“The first thing is to get the infrastructure up and running. We effectively have a mobile datacentre that we take with us in those two IT racks. It’s got compute, network, storage – all of the typical things that you’d expect to be present in any IT environment. And then it’s about integrating that into the rest of the garage [at the race track] making sure all the network and the Wi-Fi is available in the right place and carrying out all the pre-event tests that we do from race to race.”

Lewis Hamilton celebrates his 2024 British Grand Prix win

Riley describes each racetrack as a “multi-space environment” with all the additional demands that implies.

“We have our garages, but the pit wall is another, motorhome is another, the engineering offices are another. Wherever we go, we make sure all of that can talk to each other. Once we get to Thursday, it turns more into supporting the engineers when they arrive, then the rest of the Formula One team as they arrive into the circuit, making sure they’re up and running, that telemetry is streaming as it should be, data is available in the right places, before running [the car] on Friday,” he says.

“It’s an interesting thing to do three times over three weeks, when Formula One introduced the triple headers. But that’s what we sign up for, it’s an interesting challenge.”

Beating the stopwatch

The other key area where the IT team supports Hamilton and his fellow driver, George Russell, comes from Taylor’s key metric – the stopwatch.

“How can we solve age-old problems and challenges, but in new and inventive ways that ultimately reduce time and increase the number of cycles we can complete within a given period? The innovation in the tech industry is huge at the moment,” he says.


“It’s about making sure the organisation is ready and prepared for whatever comes next and being able to embrace it at the right time and to be able to maximise the value and return. Our focus really is around operational efficiency – how do we save people time? How do we use technology to reduce the manual repetitive tasks that people have to do? How can we make technology work more effectively to free up time to then hopefully reinvest in deriving a competitive advantage for the team?”

Taylor acknowledges that artificial intelligence (AI) is going to play a “very prominent part” in F1, but he’s cagey when asked to expand on its use. Most F1 teams closely protect their innovations to prevent competitors finding out how they plan to gain that extra few milliseconds around the track.

That’s especially the case for a team competing for the podium in every race. When Computer Weekly recently visited the Italian factory of the Visa Cash App RB (VCARB) F1 team – a mid-table team racing for different objectives – its executives openly discussed the use of AI and its potential for enhancing performance.

Approach to AI

But Riley is happy to explain the principles underlying his approach when it comes to AI – and his caution.

“There are lots of vendors now that have some form of AI offering which a year or two ago probably would have fallen under the banner of data science or more advanced analytics. The boundaries have been blurred a little about what artificial intelligence really is,” he says.

“But there’s certainly interest in advancements there that could serve organisationally. [Generative AI can] save people time, those repetitive tasks. There’s the ability to surface actionable insight from data within manufacturing systems and ERP, people mining [data] themselves. That’s ultimately where generative AI is beneficial. Co-pilots, code development, paired programming – there’s lots of benefits there.”

He points out that those benefits are human-oriented and about saving engineering time in a factory environment – the biggest challenge in F1 is harder to tackle.

“There’s a deterministic nature to the problem that we’re trying to solve, which is: what is the optimum car configuration, on track, at a particular point in time, taking into account all of the environmental variables as well as the climatic conditions? That’s very tough to do with generative AI. If you ask generative AI the same question five times over a 10-minute period, you’ll get five different answers,” Taylor says.

Millisecond improvements

Mercedes, like all F1 teams, makes extensive use of computational fluid dynamics (CFD), CAD/CAM systems and digital twins to model and simulate the design and engineering changes that are being made to improve the race car.

Designers and engineers are constantly trying to “mimic the aerodynamic performance of the car in a virtual world,” Taylor says.

“A lot of it is focused around those millisecond improvements. What’s the optimum front wing to have, combined with the rear wing and then all of the different setup and configurations across the car to get the driver comfortable and delivering repeatable lap times, either a single lap for qualifying or over a longer stint for a race,” he says.

“That involves a huge amount of data, data analytics, engineering, knowledge, experience and skills and lots of supporting factory-based skills to evolve that continuous lifecycle feedback loop and iterate and improve.”

The team uses a mix of in-house developed software and commercial applications, including SAP’s S/4HANA enterprise resource planning (ERP) system. One of its key software partners is remote connectivity specialist TeamViewer, whose Tensor product sits right in the heart of the trackside garage at every race.

Hamilton checks his in-pit screen for information from his engineering team

When the drivers are sitting in the car between stints on track – typically during practice or qualifying – it’s essential to share all the key telemetry data and competitor performance information to help them improve on their next run.

This is delivered via a screen that drops down from above the car and allows the driver access to all the relevant data, and connects them to the race support room in Brackley, where engineers can share and discuss whatever information Hamilton and Russell need, including video streams and in-car communications between rival drivers and their teams.

“When the car’s in the garage, you typically have two screens that get placed on the front of the car that give the driver a view of telemetry, of weather data, video replays, competitor analysis, strategy, run plans,” explains Riley.

“We use TeamViewer to control those two screens. The performance engineers will be controlling what the driver has access to see on the screens. That can be in the middle of a qualifying session where you come into the garage to get fresh tyres or more fuel, and it can be one minute or 30 seconds even in terms of turnaround. That time is incredibly important for us to make sure we convey as much information as clearly as possible.”

Seven billion data points

During a race weekend, the cars will generate over seven billion data points from hundreds of on-board sensors. Taylor describes the car as effectively an edge computing device, transferring data back to the team’s cloud. And that data is at the heart of the strategy that can make the difference between winning and losing in every race.

“When you’re talking about the strategy system, the bulk of it is in-house written. It takes all sorts of different data feeds from our car, but also from our competitors’ cars, as well as taking GPS, timing data, weather data, and combining all that together. Our focus from a tech perspective is how can we process all of those different data points in a manner that enables the strategist sat on the pit wall or in the race support room,” says Taylor.

“How can we enable them to make the right decision in the fastest, easiest, least confrontational, frictionless way possible? Technology plays a big part in that because it’s great if you can crunch all of these different data points and datasets, but if you give the answer four laps later in the race, you’ve already missed your opportunity.

“So the ability to do all of that processing within the completion of a lap is really important – you may need to make a pit call, and you’ve got five seconds to make that call. If not, the driver is past the pit entrance and the opportunity is lost.

“It’s about using the data that you’re able to generate and visualise and view and make decisions on in the most meaningful way to hopefully optimise the result.”
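None of the team’s strategy software is public, but the shape of the problem Taylor describes — fold many telemetry feeds into a single pit-or-stay recommendation before the lap completes — can be sketched in a few lines. Everything below (the `Sample` type, the thresholds, the `pitCall` function) is invented for illustration and is not Mercedes’ system.

```typescript
// Illustrative only: a toy pit-call decision from per-lap telemetry.
// All names and thresholds are hypothetical; a real strategy system weighs
// far more inputs (GPS, weather, competitor timing) under a hard deadline.

type Sample = { lapTimeSec: number; tyreWearPct: number };

const WEAR_LIMIT = 70;     // hypothetical: pit once average tyre wear passes this
const PACE_DROP_SEC = 1.5; // hypothetical: or once lap pace degrades this much

function pitCall(baselineLapSec: number, lap: Sample[]): "PIT" | "STAY OUT" {
  const avgWear = lap.reduce((s, x) => s + x.tyreWearPct, 0) / lap.length;
  const latest = lap[lap.length - 1].lapTimeSec;
  const paceGone = latest - baselineLapSec > PACE_DROP_SEC;
  return avgWear > WEAR_LIMIT || paceGone ? "PIT" : "STAY OUT";
}
```

The hard part, as Taylor notes, is not the arithmetic but the latency: the answer is worthless once the driver has passed the pit entrance, which is why the processing has to finish within the lap.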

Taylor may be an F1 veteran now, but he previously worked in IT for a local authority. Similarly, Riley’s background before joining Mercedes in 2016 was as a service manager at media company NBCUniversal.

Mercedes has a graduate programme and offers apprenticeships to bring in young talent, but for any petrolheads working in IT who would love the chance to experience a career in Formula One, the opportunities are there.

“I get lots of questions from people – how can I get a job in Formula One?” says Riley. “They might be an infrastructure engineer or a software engineer or whatever. I tell them, just go for the job. We don’t exclusively hire people from the F1 industry. Almost everyone used to do something else before they started working here. Just go for it.”

Who knows, you might be fist pumping alongside the next Lewis Hamilton one day.


Accessibility: A Computer Weekly Downtime Upload podcast




CIO Suleyman Gokyigit discusses his experience of using screen readers to help blind and visually impaired users navigate websites and apps

Suleyman Gokyigit is CIO at FIRE, a US organisation that defends free speech rights. He also works for crowd-testing company Applause, where he is involved in accessibility testing.

Among the services Applause provides is the ability for companies that have developed new software or websites to check whether they meet the design principles needed for accessibility. “Even if it’s not an accessibility feature, companies want to make sure somebody who is blind can actually use the software or visit the website,” he says.

His work with Applause involves testing and performing a series of tasks on the website or software, with his actions recorded. “This could be something like going to this web page, logging in and then creating a new order. I’m providing feedback the whole time, which allows these companies to understand,” he says.

As Gokyigit points out, a person with a screen reader does not use the mouse. “Software and websites should not be designed in a way where something has to be clicked on with the mouse. You have to be able to use a keyboard to move around and there should be ways of providing all the functionality with a keyboard,” he says.
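Gokyigit’s rule — every mouse action needs a keyboard equivalent — is easy to state in code. The sketch below is a generic illustration (the `Action` type and `keyToAction` function are invented here, not taken from any Applause tooling), showing how a custom control might map the keys a screen-reader user relies on to the same actions a click would trigger.

```typescript
// Illustration of keyboard equivalence: every pointer action gets a key.
// Real UIs attach logic like this to keydown events and pair it with
// proper ARIA roles and focus management.

type Action = "activate" | "next" | "previous" | "none";

function keyToAction(key: string): Action {
  switch (key) {
    case "Enter":
    case " ":            // Space also activates, matching a native <button>
      return "activate";
    case "ArrowDown":
    case "ArrowRight":
      return "next";     // move focus to the next item in a composite widget
    case "ArrowUp":
    case "ArrowLeft":
      return "previous";
    default:
      return "none";     // let the browser handle Tab, Escape, etc.
  }
}
```

The point is not the specific key map but that no action is reachable only by pointer — the failure mode Gokyigit warns against.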

Discussing his own personal journey with accessibility in software, Gokyigit says: “It has changed a lot over the years. I am completely blind and I use screen reading software on the computer.” When he was in elementary school, he used an Apple II and the only accessibility software available was a program that provided compatibility with just a handful of very specific applications. “It did truly basic things like typing. That’s how I learned to type originally,” he says.

While the last three decades have seen vast improvements in accessibility since he first started using a PC at the age of 12 in 1991, Gokyigit is looking forward to Apple Intelligence and how it promises to improve accessibility. “Apple is one of the first companies to start incorporating – at least theoretically – into iOS 18 and the new macOS the ability to do things like web page summaries. Summarising a web page is a lot more complicated than summarising a document because a document is purely content based, whereas a website also has actions that need to be performed,” he says.

Beyond the current use of AI, Gokyigit believes that the challenges of software usability reach beyond making software usable for people with disabilities. “The ability to have an actual conversation or being able to control your computer by speaking to it makes a lot of sense,” he says.


There’s a simple answer to the AI bias conundrum: More diversity





As we approach the two-year anniversary of ChatGPT and the subsequent “Cambrian explosion” of generative AI applications and tools, it has become apparent that two things can be true at once: The potential for this technology to positively reshape our lives is undeniable, as are the risks of pervasive bias that permeate these models.

In less than two years, AI has gone from supporting everyday tasks like hailing rideshares and suggesting online purchases, to being judge and jury on incredibly meaningful activities like arbitrating insurance, housing, credit and welfare claims. One could argue that the well-known but oft-neglected bias in these models was annoying or even humorous when it led them to recommend glue to make cheese stick to pizza, but that bias becomes indefensible when these models are the gatekeepers for the services that influence our very livelihoods.

So, how can we proactively mitigate AI bias and create less harmful models if the data we train them on is inherently biased? Is it even possible when those who create the models lack the awareness to recognize bias and unintended consequences in all its nuanced forms?

The answer: more women, more minorities, more seniors and more diversity in AI talent.

Early education and exposure

More diversity in AI shouldn’t be a radical or divisive conversation, but in the 30-plus years I’ve spent in STEM, I’ve always been a minority. While the innovation and evolution of the space in that time has been astronomical, the same can’t be said about the diversity of our workforce, particularly across data and analytics. 

In fact, the World Economic Forum reported women make up less than a third (29%) of all STEM workers, despite making up nearly half (49%) of total employment in non-STEM careers. According to the U.S. Bureau of Labor Statistics, Black professionals in math and computer science account for only 9%. These woeful statistics have remained relatively flat for 20 years, and the figure for women degrades to a meager 12% as you narrow the scope from entry-level positions to the C-suite.

The reality is, we need comprehensive strategies that make STEM more attractive to women and minorities, and this starts in the classroom as early as elementary school. I remember watching a video that the toy company Mattel shared of first or second graders who were given a table of toys to play with. Overwhelmingly, girls chose traditional ‘girl toys,’ such as a doll or ballerina, but ignored other toys, like a race car, as those were for boys. The girls were then shown a video of Ewy Rosqvist, the first woman to win the Argentinian Touring Car Grand Prix, and the girls’ outlook completely changed. 

It’s a lesson that representation shapes perception and a reminder that we need to be much more intentional about the subtle messages we give young girls around STEM. We must ensure equal paths for exploration and exposure, both in regular curriculum and through non-profit partners like Data Science for All or the Mark Cuban Foundation’s AI bootcamps. We must also celebrate and amplify the women role models who continue to boldly pioneer this space — like AMD CEO Lisa Su, OpenAI CTO Mira Murati or Joy Buolamwini, who founded The Algorithmic Justice League — so girls can see that in STEM it isn’t just men behind the wheel.

Data and AI will be the bedrock of nearly every job of the future, from athletes to astronauts, fashion designers to filmmakers. We need to close inequities that limit access to STEM education for minorities and we need to show girls that an education in STEM is literally a doorway to a career in anything. 

To mitigate bias, we must first recognize it

Bias infects AI in two prominent ways: Through the vast data sets models are trained on and through the personal logic or judgements of the people who construct them. To truly mitigate this bias, we must first understand and acknowledge its existence and assume that all data is biased and that people’s unconscious bias plays a role. 

Look no further than some of the most popular and widely used image generators like Midjourney, DALL-E and Stable Diffusion. When reporters at The Washington Post prompted these models to depict a ‘beautiful woman,’ the results showed a staggering lack of representation in body types, cultural features and skin tones. Feminine beauty, according to these tools, was overwhelmingly young and European — thin and white.

Just 2% of the images had visible signs of aging and only 9% had dark skin tones. One line from the article was particularly jarring: “However bias originates, The Post’s analysis found that popular image tools struggle to render realistic images of women outside the western ideal.” Further, university researchers have found that ethnic dialect can lead to “covert bias” in identifying a person’s intellect or recommending death sentences.

But what if bias is more subtle? In the late 80s, I started my career as a business system specialist in Zurich, Switzerland. At that time, as a married woman, I wasn’t legally allowed to have my own bank account, even if I was the primary household earner. If a model is trained on vast troves of women’s historical credit data, there’s a point in some geographies where it simply doesn’t exist. Overlap this with the months or even years some women are away from the workforce for maternity leave or childcare responsibilities — how are developers aware of those potential discrepancies and how do they compensate for those gaps in employment or credit history? Synthetic data enabled by gen AI may be one way to address this, but only if model builders and data professionals have the awareness to consider these problems.

That’s why it’s imperative that a diverse representation of women not only have a seat at the AI table, but an active voice to construct, train and oversee these models. This simply can’t be left to happenstance or the ethical and moral standards of a few select technologists who historically have represented only a sliver of the richer global population.  

More diversity: A no-brainer

Given the rapid race for profits and the tendrils of bias rooted in our digital libraries and lived experiences, it’s unlikely we’ll ever fully vanquish it from our AI innovation. But that can’t mean inaction or ignorance is acceptable. More diversity in STEM and more diversity of talent intimately involved in the AI process will undoubtedly mean more accurate, inclusive models — and that’s something we will all benefit from.

Cindi Howson is chief data strategy officer at ThoughtSpot and a former Gartner Research VP.




Nintendo revives Famicom Detective Club in year of mystery | Kaser Focus



As both a mystery game fan and a Person in the Games Industry, I would like to hereby dub 2024 the Year of the Mystery Game. We’ve gotten a new Nancy Drew game, two Ace Attorney collections (which I’m hoping against all odds means we might get a new game at some point in the future), a Golden Idol sequel and now Nintendo has announced it’s making a new Famicom Detective Club game. Specifically, the game is called Emio — The Smiling Man: Famicom Detective Club, and it launches on August 29.

The Emio reveal was especially enjoyable for me, as I hadn’t played the Famicom Detective Club games before Nintendo remastered the duology for Switch in 2021. The fact that they managed to revive such an old franchise to the point where they’re making an entirely new game — not to mention giving the new game some focused viral marketing — makes me very hopeful for not only the future of mystery adventure games but older franchises generally.

In other news, it feels like Xbox and Apex players are still reeling from last week’s announced price hikes. Paramount’s Halo series has been canceled, meaning I’m gonna go back to spending the bulk of my time on Paramount+ watching Perry Mason reruns. The Fallout Prime series, on the other hand, cleaned up at the Emmy nominations with 16 chances at the statues.

In personal news, this week has seen a wealth of new releases, many of which I’ve been wanting to play for some time. I’ll likely start with Flintlock: The Siege of Dawn, as I played the demo during Steam Next Fest and liked what I saw. Then I can move on to Kunitsu-Gami and possibly Dungeons of Hinterberg, if there’s time. I’ll also likely be following along with EVO on Twitch — it’s always a treat watching people who are good at games play them!


What to play this week

What’s new:

  • Flintlock: The Siege of Dawn
  • Kunitsu-Gami: Path of the Goddess
  • EA Sports College Football 25
  • Dungeons of Hinterberg
  • Nintendo World Championships: NES Edition
  • Magical Delicacy
  • Evil V Evil
  • Nobody Wants To Die
  • Gestalt: Steam & Cinder
  • Schim
  • Bō: Path of the Teal Lotus
  • Demon Slayer – Kimetsu no Yaiba – Sweep the Board!

New on subscription services:

  • Flock (Xbox Game Pass)
  • Remnant II (PlayStation Plus Extra + Premium)
  • Crisis Core – Final Fantasy VII – Reunion (PlayStation Plus Extra + Premium)
  • Mount & Blade II: Bannerlord (PlayStation Plus Extra + Premium)
  • The Jackbox Party Pack 9 (PlayStation Plus Extra + Premium)
  • Pathfinder: Wrath of the Righteous (PlayStation Plus Extra + Premium)
  • No More Heroes 3 (PlayStation Plus Extra + Premium)
  • Travis Strikes Again: No More Heroes (PlayStation Plus Extra + Premium)
  • Deadcraft (PlayStation Plus Extra + Premium)
  • Steep (PlayStation Plus Extra + Premium)
  • Job Simulator (PlayStation Plus Premium)
  • Summoner (PlayStation Plus Premium)
  • Ratchet & Clank: Size Matters (PlayStation Plus Premium)
  • Jeanne d’Arc (PlayStation Plus Premium)
  • Youtubers Life 2 (Prime Gaming)
  • Maneater (Prime Gaming)
  • Baldur’s Gate: Enhanced Edition (Prime Gaming)
  • The Crust (GeForce Now)
  • Norland (GeForce Now)
  • Content Warning (GeForce Now)
  • Crime Boss: Rockay City (GeForce Now)


OpenAI, Nvidia, and Hugging Face unveil small AI models: GPT-4o Mini, Mistral-Nemo, and SmolLM lead industry shift


Credit: VentureBeat made with Midjourney



Three major players in artificial intelligence unveiled compact language models this week, signaling a major shift in the AI industry. Hugging Face, Nvidia in partnership with Mistral AI, and OpenAI have each released small language models (SLMs) that promise to democratize access to advanced natural language processing capabilities. This trend marks a significant departure from the race for ever-larger neural networks and could redefine how businesses implement AI solutions.

The new models—SmolLM, Mistral-Nemo, and GPT-4o Mini—each represent different approaches to creating more accessible AI, but they all share a common goal: bringing powerful language processing capabilities to a wider range of devices and applications.

Small wonders: How compact AI models are changing edge computing

Hugging Face’s SmolLM stands out as perhaps the most radical of the three. Designed to run directly on mobile devices, SmolLM comes in three sizes: 135 million, 360 million, and 1.7 billion parameters. This range pushes AI processing to the edge, addressing critical issues of data privacy and latency.

The implications of SmolLM extend far beyond mere efficiency gains. By bringing AI capabilities directly to edge devices, it paves the way for a new generation of applications that can operate with minimal latency and maximum privacy. This could fundamentally alter the landscape of mobile computing, enabling sophisticated AI-driven features that were previously impractical due to connectivity or privacy constraints.
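To get a feel for why these parameter counts matter for mobile deployment, a model's raw weight footprint can be roughly estimated as parameter count times bytes per parameter (2 bytes at fp16, 1 at int8). This is a minimal back-of-the-envelope sketch using the SmolLM sizes quoted above; it ignores activations and runtime overhead, so real on-device memory use will be somewhat higher.

```python
# Rough weight-memory estimate: parameters * bytes per parameter.
# Ignores activations, KV cache, and runtime overhead.

def weight_footprint_gb(params: float, bytes_per_param: float = 2.0) -> float:
    """Approximate model weight size in GB (default 2 bytes/param = fp16)."""
    return params * bytes_per_param / 1e9

smollm_sizes = {"SmolLM-135M": 135e6, "SmolLM-360M": 360e6, "SmolLM-1.7B": 1.7e9}

for name, params in smollm_sizes.items():
    fp16 = weight_footprint_gb(params)        # half precision
    int8 = weight_footprint_gb(params, 1.0)   # 8-bit quantized
    print(f"{name}: ~{fp16:.2f} GB fp16, ~{int8:.2f} GB int8")
```

By this estimate, even the largest SmolLM variant fits in roughly 3.4 GB at fp16 (about 1.7 GB at int8), which is why phone-class hardware is plausible for the smaller sizes.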

Nvidia and Mistral AI’s collaboration has produced Mistral-Nemo, a 12-billion parameter model with an impressive 128,000 token context window. Released under the Apache 2.0 license, Mistral-Nemo targets desktop computers, positioning itself as a middle ground between massive cloud models and ultra-compact mobile AI.

Mistral-Nemo’s approach could be particularly disruptive in the enterprise space. By leveraging consumer-grade hardware, it has the potential to democratize access to sophisticated AI capabilities that were once the exclusive domain of tech giants and well-funded research institutions. This could lead to a proliferation of AI-powered applications across various industries, from enhanced customer service to more sophisticated data analysis tools.
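A 128,000-token context window is demanding on desktop hardware largely because of the key-value cache that transformer inference keeps per token. The standard estimate is 2 (K and V) × layers × KV heads × head dimension × sequence length × bytes per value. The sketch below uses that formula; the layer and head counts are illustrative placeholders, not published Mistral-Nemo specifications.

```python
def kv_cache_gb(seq_len: int, n_layers: int, n_kv_heads: int,
                head_dim: int, bytes_per_value: int = 2) -> float:
    """Approximate KV-cache size in GB for one sequence.

    Counts both the K and V tensors at every layer, stored at
    fp16 (2 bytes) by default.
    """
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_value / 1e9

# Illustrative architecture values only (not official Mistral-Nemo specs):
full_context = kv_cache_gb(128_000, n_layers=40, n_kv_heads=8, head_dim=128)
print(f"KV cache at full context: ~{full_context:.1f} GB")
```

Even with grouped-query attention keeping the KV head count low, a full 128k-token cache can run to tens of gigabytes at fp16, which is why long-context desktop inference typically leans on cache quantization or shorter effective windows.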

The price is right: OpenAI’s cost-efficient GPT-4o Mini breaks new ground

OpenAI has entered the SLM arena with GPT-4o Mini, touted as the most cost-efficient small model on the market. Priced at just 15 cents per million tokens for input and 60 cents per million for output, GPT-4o Mini significantly reduces the financial barriers to AI integration.

OpenAI’s pricing strategy with GPT-4o Mini could catalyze a new wave of AI-driven innovation, particularly among startups and small businesses. By dramatically reducing the cost of AI integration, OpenAI is effectively lowering the barriers to entry for AI-powered solutions. This could lead to a surge in AI adoption across various sectors, potentially accelerating the pace of technological innovation and disruption in multiple industries.
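The quoted rates make per-request costs easy to estimate: input and output tokens are billed separately, at $0.15 and $0.60 per million tokens respectively. A minimal cost sketch using those list prices (the example token counts are hypothetical):

```python
# GPT-4o Mini list prices quoted above.
INPUT_PER_M = 0.15    # USD per 1M input tokens
OUTPUT_PER_M = 0.60   # USD per 1M output tokens

def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost of one API request at the quoted per-token rates."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# e.g. a chatbot turn with a 2,000-token prompt and a 500-token reply:
cost = request_cost_usd(2_000, 500)
print(f"${cost:.6f} per request")   # $0.0003 input + $0.0003 output = $0.0006
```

At those rates, a service handling a million such requests a day would spend on the order of $600 daily, which illustrates why this pricing is pitched at startups and high-volume integrations.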

This shift towards smaller models reflects a broader trend in the AI community. As the initial excitement over massive language models gives way to practical considerations, researchers and developers increasingly focus on efficiency, accessibility, and specialized applications.

The focus on SLMs represents a maturation of the AI field, shifting from a preoccupation with raw capabilities to a more nuanced understanding of real-world applicability. This evolution could lead to more targeted and efficient AI solutions, optimized for specific tasks and industries rather than trying to be all-encompassing.

The trend towards SLMs also aligns with growing concerns about the environmental impact of AI. Smaller models require less energy to train and run, and as AI becomes increasingly ubiquitous, the cumulative energy savings from widespread adoption of more efficient models could be substantial. As companies face mounting pressure to adopt sustainable practices, this could become a significant selling point, positioning AI as a driver of green innovation rather than a contributor to climate change.

However, the rise of SLMs is not without challenges. As AI becomes more ubiquitous, issues of bias, accountability, and ethical use become even more pressing. The democratization of AI through SLMs could potentially amplify existing biases or create new ethical dilemmas if not carefully managed. It will be crucial for developers and users of these technologies to prioritize ethical considerations alongside technical capabilities.

Moreover, while smaller models offer advantages in terms of efficiency and accessibility, they may not match the raw capabilities of their larger counterparts in all tasks. This suggests a future AI landscape characterized by a diversity of model sizes and specializations, rather than a one-size-fits-all approach. The key will be finding the right balance between model size, performance, and specific application requirements.

Despite these challenges, the shift towards SLMs represents a significant evolution in the AI landscape. As these models continue to improve and proliferate, we may see a new era of AI-enabled devices and applications, bringing the benefits of artificial intelligence to a broader range of users and use cases.

For businesses and technical decision-makers, the message is clear: the future of AI is not just about raw power, but about smart, efficient solutions that can be easily integrated into existing systems. As the AI revolution scales down in size, its impact on businesses and society may only grow larger.
