DOJ charges Russian nationals with laundering bitcoin in 2011 Mt. Gox hack


The US Department of Justice announced today that it charged two Russian nationals for crimes related to the 2011 hacking of Mt. Gox, the now-defunct crypto exchange that was one of the world’s largest at the time. Alexey Bilyuchenko and Aleksandr Verner are accused in the Southern District of New York (SDNY) of laundering about 647,000 bitcoins connected to the heist. In addition, Bilyuchenko faces separate charges in the Northern District of California (NDCA) related to running the infamous Russian crypto exchange BTC-e.

The pair are being charged in SDNY with conspiracy to commit money laundering. Meanwhile, the NDCA charges are for conspiracy to commit money laundering and operating an unlicensed money services business. The SDNY charges carry a maximum sentence of 20 years for each defendant, while Bilyuchenko faces a maximum of 25 years in prison in the NDCA indictment.

The DOJ says Bilyuchenko, Verner and co-conspirators gained access to the server storing Mt. Gox’s crypto wallets in or about September 2011. Once they infiltrated the servers, the pair and their partners allegedly initiated the transfer of customers’ bitcoins to accounts they controlled. In addition, they’re accused of laundering the stolen bitcoins to accounts on other crypto exchanges also controlled by the group.

The conspirators allegedly negotiated and entered into a fraudulent “advertising contract” with a New York bitcoin brokerage service, a relationship they used to request regular transfers to “various offshore bank accounts, including in the names of shell corporations, controlled by Bilyuchenko, Verner, and their co-conspirators.” The DOJ says the group transferred over $6.6 million from March 2012 to April 2013.

“This announcement marks an important milestone in two major cryptocurrency investigations,” said US Assistant Attorney General Kenneth A. Polite, Jr. “As alleged in the indictments, starting in 2011, Bilyuchenko and Verner stole a massive amount of cryptocurrency from Mt. Gox, contributing to the exchange’s ultimate insolvency. Armed with the ill-gotten gains from Mt. Gox, Bilyuchenko allegedly went on to help set up the notorious BTC-e virtual currency exchange, which laundered funds for cyber criminals worldwide. These indictments highlight the department’s unwavering commitment to bring to justice bad actors in the cryptocurrency ecosystem and prevent the abuse of the financial system.”




Toyota unveils a hydrogen race car concept built for Le Mans 24 Hours


Modern electric vehicles aren’t very practical for endurance races due to their long charging times, but Toyota may have an alternative. Its Gazoo Racing unit has unveiled a GR H2 Racing Concept that’s designed to compete in the Le Mans 24 Hours race’s new hydrogen car category. The automaker isn’t divulging specs, but the appeal is clear: this is an emissions-free car that can spend more time racing and less time topping up.

Toyota doesn’t say if or when a race-ready GR H2 will hit the track. The machine is built for “future competition,” the brand says. Don’t be surprised if Toyota refines the concept before bringing it to a Le Mans race.

The company is no stranger to low- and zero-emissions motorsports. The brand has been racing a hydrogen-engine Corolla in Japan’s Super Taikyu Series since 2021, and its GR010 hybrid hypercar took the top two overall podium spots at last year’s Le Mans. A purpose-built hydrogen car like the GR H2 is really an extension of the company’s strategy.

The announcement comes at a delicate moment for Toyota. The automaker is shifting its focus to EVs after years of resisting the segment in favor of hybrids and hydrogen cars. At the same time, new CEO Koji Sato wants to be sure hydrogen remains a “viable option.” The GR H2 may be a hint as to how Toyota tackles this dilemma: it can keep using hydrogen in categories where fast stops are important, such as racing and trucking, while courting a passenger car market that insists on EVs like the bZ4X and Lexus RZ.




Watch Summer Game Fest’s Tribeca Games Spotlight here at 3PM ET


The Summer Game Fest party keeps rolling today with the Tribeca Games Spotlight. Unlike many of the other Summer Game Fest showcases, Tribeca has already announced which games it will feature. As in previous years, the festival is highlighting games with a focus on artistic storytelling. You can watch the stream below at 3PM ET.

Arguably the most prominent game of the bunch is The Expanse: A Telltale Series. This is a prequel to the Amazon Prime show of the same name. You’ll play as Camina Drummer (Cara Gee). Players will have to make tough choices that impact the future of a crew of space scavengers. There should be more exploration than in previous Telltale titles as well. Telltale will release the game in chapters every two weeks starting on July 27th.

There will be fresh looks at Stray Gods, a “roleplaying musical” that features much of the cast of The Last of Us, and Goodbye Volcano High, a narrative adventure game that first emerged during a PlayStation presentation three years ago. A Highland Song has been on my radar for a while, and we’ll find out more details about the so-called rhythm survival platformer during Tribeca’s event.

The stream will highlight a few other games, including Despelote, a story-driven soccer game with an eye-catching art style from publisher Panic. Nightscape is a 2.5D “atmospheric adventure game” from a studio in Qatar, while the Focus-published Chants of Sennaar is an adventure title based on the myth of Babel.

If you’re in New York City, you can be among the first to try playable demos of these games at the festival’s Spring Studios hub. Tribeca runs until June 18th. On the film side, the festival is hosting the world premiere of Hideo Kojima: Connecting Worlds, a documentary about the creative mind behind Death Stranding and the Metal Gear series. Kojima will be in attendance for a Q&A.

Meanwhile, Engadget is on the ground in Los Angeles for all things Summer Game Fest. We’ve got previews and hands-on impressions of many of the games being featured coming your way.

Catch up on all of the news from Summer Game Fest right here!




Noise Could Take Years Off Your Life. Here Are the Health Impacts



A looping video of a suburban neighborhood curbside on a cloudy day. Suddenly, a jet plane roars overhead. A graphic overlaid on the video shows a decibel reading ranging from 62 to 94.







On a spring afternoon in Bankers Hill, San Diego, the soundscape is serene: Sea breeze rustles through the trees, and neighbors chat pleasantly across driveways.

Except for about every three minutes, when a jet blazes overhead with an ear-piercing roar.

A growing body of research shows that this kind of chronic noise — which rattles the neighborhood over 280 times a day, more than 105,000 times each year — is not just annoying. It is a largely unrecognized health threat that is increasing the risk of hypertension, stroke and heart attacks worldwide, including for more than 100 million Americans.

We’ve all been told to limit the volume on our headphones to protect our hearing. But it is the relentless din of daily life in some places that can have lasting effects throughout the body.


A looping video of a multilane city road underneath an overpass. Cars and trucks stream past in both directions. A graphic overlaid on the video shows a decibel reading ranging from 71 to 81.



Anyone who lives in a noisy environment, like the neighborhoods near this Brooklyn highway, may feel they have adapted to the cacophony. But data shows the opposite: Prior noise exposure primes the body to overreact, amplifying the negative effects.

Even people who live in relatively peaceful rural and suburban communities can be at risk. The sudden blare of trains that run periodically through D’Lo, Miss. (population: less than 400), can be especially jarring to the body because there is little ambient noise to drown out the jolt.


A looping video of a home in a wooded neighborhood facing a train track. As an adult and a small child walk out the door, a train thunders past in a cacophony of horn blasts and mechanical commotion. A graphic overlaid on the video shows a decibel reading ranging from 59 to 117.



We went to neighborhoods in rural Mississippi, New York City, suburban California and New Jersey to measure residents’ noise exposure and interview them about the commotion in their lives. We consulted more than 30 scientists and reviewed thousands of pages of research and policy to examine the pathology and epidemiology of noise.

What noise does to your body

A siren shrills. A dog barks. Engines thrum. Jackhammers clack.











Unpleasant noise enters your body through your ears, but it is relayed to the stress detection center in your brain.

A black-and-white illustration of a woman looking toward her right. A wavelike pink signal, representing noise, is entering her ear.

This area, called the amygdala, triggers a cascade of reactions in your body. If the amygdala is chronically overactivated by noise, the reactions begin to produce harmful effects.

An illustration of the same woman, now showing a representation of her brain anatomy. Two small areas of the brain near her eyes are highlighted, representing the amygdala.

The endocrine system can overreact, causing too much cortisol, adrenaline and other chemicals to course through the body.

The illustration now shows sections of the woman’s skeleton and internal organs in addition to the brain. A handful of endocrine system regions throughout her body are highlighted, such as the butterfly-shaped thyroid gland in her neck and the banana-shaped pancreas in her torso.

The sympathetic nervous system can also become hyperactivated, quickening the heart rate, raising blood pressure and triggering the production of inflammatory cells.

The illustration shows a section of the woman’s upper spinal cord and nerves, which are highlighted to represent the sympathetic nervous system.

Over time, these changes can lead to inflammation, hypertension and plaque buildup in arteries, increasing the risk of heart disease, heart attacks and stroke.

The illustration now shows some of the woman’s major arteries throughout her body and in her brain.




To understand this pathway, researchers broke it down: They scanned the brains of people as they listened to unpleasant sounds — styrofoam rubbing, nails on a chalkboard, a dentist’s drill — and watched live as their amygdalas activated. They also strapped blood pressure monitors and noise dosimeters onto auto assembly plant workers during a shift to see their blood pressures and heart rates rise with their noise exposure.

To simulate relentless nights, scientists played dozens of sporadic recordings of passing trains and planes overhead in healthy volunteers’ bedrooms — recordings taken of real disruptions from people’s homes. They found that the next morning, the volunteers had higher adrenaline levels, stiffened arteries and spikes in plasma proteins that indicate inflammation.

When researchers analyzed the brain scans and health records of hundreds of people at Massachusetts General Hospital, they made a stunning discovery: Those who lived in areas with high levels of transportation noise were more likely to have highly activated amygdalas, arterial inflammation and — within five years — major cardiac events.

The associations remained even after researchers adjusted for other environmental and behavioral factors that could contribute to poor cardiac health, like air pollution, socioeconomic factors and smoking.

In fact, noise may trigger immediate heart attacks: Higher levels of aircraft noise exposure in the two hours preceding nighttime deaths have been tied to heart-related mortality.

How loud is too loud?

Sound is often measured on a scale of decibels, or dB, in which near total silence is zero dB and a firecracker exploding within a meter of the listener is about 140 dB.

We used a professional device called a sound level meter to record the decibel levels of common sounds and environments.











Compared with a quiet room, a passing freight train peaks at about four times as many decibels.

A chart showing decibel measurements for a quiet room at 27 dB, a busy street at 69 dB, a hair dryer at 87 dB and a freight train at 117 dB.

But the difference in how loud the train sounds to the ear is much more dramatic: The train sounds more than 500 times as noisy.

A chart showing the relative loudness of three sounds compared with a quiet room. A busy street is 19 times as loud, a hair dryer is 66 times as loud, and a freight train is 516 times as loud.




That’s because the decibel scale is logarithmic, not linear: With every 10 dB increase, the sense of loudness to the ear generally doubles. And that means regular exposure to even a few more decibels of noise above moderate levels can trigger reactions that are harmful to health.
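For readers who want to check the arithmetic, here is a small Python sketch of that rule of thumb. The doubling-per-10-dB relationship is a common approximation of perceived loudness rather than the exact psychoacoustic model used for the chart, and the reference readings are the ones measured for this article.

# Approximate perceived loudness relative to the 27 dB quiet room,
# using the "every 10 dB roughly doubles the loudness" rule of thumb.
QUIET_ROOM_DB = 27

def loudness_vs_quiet_room(db):
    return 2 ** ((db - QUIET_ROOM_DB) / 10)

for label, db in [("busy street", 69), ("hair dryer", 87), ("freight train", 117)]:
    print(f"{label}: {db} dB feels roughly {loudness_vs_quiet_room(db):.0f}x as loud")

# Prints about 18x, 64x and 512x, in line with the roughly 19, 66 and 516
# times-as-loud figures in the chart above.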

According to the World Health Organization, average road traffic noise above 53 dB or average aircraft noise exposure above about 45 dB are associated with adverse health effects.

Nearly a third of the U.S. population lives in areas exposed to noise levels of at least 45 dB, according to a preliminary analysis based on models of road, rail and aircraft noise in 2020 from the Department of Transportation.

This chart shows how many people in the United States may be exposed to various outdoor noise levels, on average. Since transportation volumes in 2020 were low because of the pandemic, researchers suspect that current transportation-related noise could be notably higher.


A chart showing the relative number of people in the United States estimated to live at each of five different noise levels. An estimated three million people may live in areas above an average of 70 dB; nine million in areas from 60 to 70 dB; 39 million from 50 to 60 dB; 44 million from 45 to 50 dB; and 232 million in areas below 45 dB.



In this Brooklyn apartment, the windows are closed, but indoor sound levels are consistently above the maximum average levels recommended by the W.H.O.


A looping video of a window looking out over a busy overpass on a cloudy day. Both on the overpass and on the roads below, cars flow by steadily in both directions. A graphic overlaid on the video shows a decibel reading ranging from 53 to 65.

Brooklyn-Queens Expressway


The nighttime noise that a person in such an environment experiences is considered particularly detrimental to health because it can fragment sleep and trigger a stress response, even if the person does not recall being roused.

The W.H.O. has long recommended less than 40 dB as an annual average of nighttime noise outside bedrooms to prevent negative health effects, and less than 30 dB of nighttime noise inside bedrooms for high-quality sleep. That’s even quieter than inside this house in D’Lo, when a train isn’t going by.


A looping video of a window looking out over a railroad track and lush greenery bathed in dappled sunlight. A graphic overlaid on the video shows a decibel reading ranging from 32 to 39.

D’Lo, Miss., in between trains.


Mounting research suggests that the relationship between noise levels and disease is eerily consistent: A study following more than four million people for more than a decade, for example, found that, starting at just 35 dB, the risk of dying from cardiovascular disease increased by 2.9 percent for every 10 dB increase in exposure to road traffic noise.

The increase in risk of dying from a heart attack was even more pronounced: Also starting at just 35 dB, it increased by 4.3 percent for every 10 dB increase in road traffic noise.

Not all loud noise is equal

At High Tech Middle School in Point Loma, San Diego — less than a mile from the runway of San Diego International Airport — the roofs above classrooms are heavily insulated to mitigate the rumble. But students still have a term for an aircraft interruption so loud that it halts discussion: the Point Loma Pause.


A looping video of middle and high schoolers walking about campus. The chatter of students is interrupted by a jet engine roaring just overhead in the cloudy sky. A graphic overlaid on the video shows a decibel reading ranging from 63 to 79.



Scientists believe that pronounced fluctuations in noise levels like this might compound the effects on the body. They suspect jarring sounds that break through the ambience — recurring jet engines, a pulsating leaf blower, or the brassy whistle of trains — are more detrimental to health than the continuous whirring of a busy roadway, even if the average decibel levels are comparable.

To visualize the concept, Swiss researchers measured and compared transportation noise along a highway and a railroad track over the course of a night.





In a subsequent Swiss study, higher degrees of nighttime “noise intermittency” — or the extent to which sound events were distinguishable from the background levels — were associated with heart disease, heart attacks, heart failure and strokes.

Who is most at risk?

As with so many health issues, poor people and communities of color are more likely to experience excessive noise exposure because they often have fewer housing choices and are more likely to live near high-traffic roads, raucous waste dumps and industrial areas.

According to a study of more than 94,000 schools, students in those estimated to be most highly exposed to road or aviation noise were significantly more likely to be eligible for free or reduced-price meals and to be Hispanic, Black or Asian/Pacific Islander. Such excess noise in schools is associated with heightened stress hormones, lower reading scores and even hyperactivity among children.

Nighttime noise shows similar inequities. Census data shows that city communities with almost no low-income residents averaged 44 dB at night, compared with about 47 dB in those where half of residents fall below the poverty line. Neighborhoods with almost no Black residents averaged about 42 dB at night, compared with about 46 dB in communities that were three-fourths Black.

The difference of a few dBs might not seem like much, but for every one dB increase, the risk of developing cardiovascular disease climbs by roughly another percentage point, according to a preliminary analysis of more than 100,000 U.S. nurses. And as dBs climb, so too do associations with death from cardiovascular disease and heart attack.

The disparities in noise exposure are likely to be much larger than the noise model suggests, researchers said, since wealthier households and schools are more likely to install triple-pane windows and more insulation. And the inequities are not unique to the United States: Spatial modeling has revealed similar disparities within various countries across four other continents.

What can be done?

Fifty years ago, under the Noise Control Act of 1972, the newly formed Environmental Protection Agency was a trailblazer in recognizing the danger of noise and addressing it: It educated the public, established safety limits, published deep analyses of various culprits and recommended actions to mitigate harm.

But its office of noise abatement was defunded by the Reagan administration, rendering policies unenforceable and regulatory criteria obsolete. The Occupational Safety and Health Administration’s eight-hour workplace noise limit is still 90 dB.

European countries have far outpaced the rest of the world in regulating noise. The European Union requires member nations to monitor and assess sound levels across regions and to produce new action plans every five years to address communities at greatest risk. The E.U. now mandates quiet brake blocks on rail freight fleets and noise labels on outdoor power equipment; it also requires noise reduction in car manufacturing and mitigation efforts at airports.

Individual cities and countries have taken additional measures. Paris has installed noise cameras that measure the sound level of vehicles and fine drivers who exceed the limit. Berlin has used new bike lanes to reduce the flow of engine-powered vehicles and move the source of the noise to the center of the road, away from houses. Switzerland has introduced national “quiet hours” — overnight, one midday hour on weekdays and all day on Sundays.

While scientists say it’s too soon to make a prediction about the effects of these policies on cardiovascular health, several European countries have reported tens of thousands fewer residents exposed to major sources of noise.

Like many health issues, protection against noise would be economically advantageous. Economists who analyzed health care spending and productivity loss because of heart disease and hypertension have argued that a 5 dB reduction in U.S. noise could result in an annual benefit of $3.9 billion.

But unlike most other contributors to heart disease, noise cannot be addressed fully between a patient and a doctor. Protection requires changes in local, state and federal policy.

In the meantime, in D’Lo, Miss., George Jackson has repeatedly jacked up his home to decrease the vibration. In Mendenhall, Carolyn Fletcher tried resealing her windows. In Bankers Hill, Ron Allen says all he can do is take vitamin supplements and plug his ears.


Sources and methodology


For the decibel graphic on the videos and the graphic comparing decibel levels, we measured decibels using a SoundAdvisor Model 831C sound level meter from Larson Davis. In both cases, we show A-weighted decibels to emphasize the frequencies that are audible to the human ear and that are commonly used in health studies and regulatory requirements. For each video, we positioned the sound level meter next to the camera, which was about shoulder height.


For the decibel graphic, we measured sound levels in an empty room; on the sidewalk of a busy New York City street; and a few inches away from a hair dryer in a quiet room. The videos show decibel changes on a linear scale.


Most research and policy cited in this article used A-weighted measurements.


Estimates of the number of people in the United States exposed to each decibel range do not include U.S. territories and are from Department of Transportation data analyzed by Edmund Seto and Ching-Hsuan Huang at the University of Washington.


The data for the Swiss transportation noise chart was provided by Jean Marc Wunderli at the Swiss Federal Laboratories for Materials Science and Technology, and it was derived from research in the Journal of Exposure Science & Environmental Epidemiology.


Anatomy references are from the third edition of “Anatomische Atlas,” edited by Anne M. Gilroy, Brian R. MacPherson and Jamie C. Wikenheiser.


Additional sources


Jamie Banks, president of Quiet Communities and chair of the Noise & Health Committee at the American Public Health Association


Dr. Mathias Basner, sleep health researcher, University of Pennsylvania


Stuart Batterman, professor of environmental health sciences, University of Michigan


Rachel Buxton, soundscape ecologist, Carleton University


Joan Casey, assistant professor, University of Washington School of Public Health


Timothy William Collins, professor of geography, University of Utah


Andreas Daiber, molecular cardiologist, University Medical Center Mainz


Gary Evans, environmental and developmental psychologist, Cornell University


Dr. Daniel Fink, board chair, The Quiet Coalition


Kurt Fristrup, affiliate research scientist at Colorado State University and retired sound researcher at the National Park Service


Ching-Hsuan Huang, doctoral candidate, University of Washington


Chandra Jackson, cardiovascular epidemiologist and investigator, National Institutes of Health


Peter James, environmental epidemiologist, Harvard Medical School


Chucri Kardous, retired research engineer, National Institute for Occupational Safety and Health


Nina Lee, doctoral student and research assistant at the Brown Community Noise Lab


Dr. Thomas Münzel, chief of cardiology, University Medical Center Mainz


Dr. Jose V. Pardo, professor of psychiatry, University of Minnesota


Dr. Andrei Pyko, environmental epidemiologist, Karolinska Institutet


Rebecca Rolland, speech-language pathologist and Harvard lecturer


Charlie Roscoe, postdoctoral fellow, Harvard University


Edmund Seto, associate professor of Environmental and Occupational Health Sciences, University of Washington


Ed Strocko, director of the Office of Spatial Analysis and Visualization, Bureau of Transportation Statistics


Dr. Ahmed Tawakol, associate professor of medicine, Harvard Medical School


Danielle Vienneau, group leader, Swiss Tropical and Public Health Institute


Erica Walker, assistant professor of epidemiology, Brown University School of Public Health


Jean Marc Wunderli, chair of the acoustics and noise control lab, Swiss Federal Laboratories for Materials Science and Technology


Special thanks to community members in D’Lo, Mendenhall and Braxton, Miss.; Loma Portal, Ocean Beach and Bankers Hill in San Diego, Calif.; South Orange, N.J.; and Greenpoint, Brooklyn.




Generative AI can help bring tomorrow’s gaming NPCs to life


Elves and Argonians clipping through walls and stepping through tables, blacksmiths who won’t acknowledge your existence until you take a single step to the left, draugrs that drop into rag-doll seizures the moment you put an arrow through their eye — Bethesda’s long-running Elder Scrolls RPG series is beloved for many reasons, but the realism of its non-playable characters (NPCs) is not among them. Still, the days of hearing the same rote quotes and watching the same half-hearted search patterns perpetually repeated by NPCs are quickly coming to an end. It’s all thanks to the emergence of generative chatbots that are helping game developers craft more lifelike, realistic characters and in-game action.

“Game AI is seldom about any deep intelligence but rather about the illusion of intelligence,” Steve Rabin, Principal Software Engineer at Electronic Arts, wrote in the 2017 essay, The Illusion of Intelligence. “Often we are trying to create believable human behavior, but the actual intelligence that we are able to program is fairly constrained and painfully brittle.”

Just as with other forms of media, video games require the player to suspend their disbelief for the illusions to work. That’s not a particularly big ask given the fundamentally interactive nature of gaming. “Players are incredibly forgiving as long as the virtual humans do not make any glaring mistakes,” Rabin continued. “Players simply need the right clues and suggestions for them to share and fully participate in the deception.”

Early days

Take Space Invaders and Pac-Man, for example. In Space Invaders, the descending enemies remained steadfast on their zig-zag path towards Earth’s annihilation, regardless of the player’s actions, with the only change coming as a speed increase when they got close enough to the ground. There was no enemy intelligence to speak of; only the player’s skill in leading targets would carry the day. Pac-Man, on the other hand, used enemy interactions as a tentpole of gameplay.

Under normal circumstances, the Ghost Gang will coordinate to track and trap The Pac — unless the player gobbles up a Power Pellet before vengefully hunting down Blinky, Pinky, Inky and Clyde. That simple, two-state behavior, essentially a fancy if-then statement in C, proved revolutionary for the nascent gaming industry and became a de facto method of programming NPC reactions for years to come using finite-state machines (FSMs).

Finite-state machines

A finite-state machine is a mathematical model that abstracts a theoretical “machine” capable of existing in any number of states — ally/enemy, alive/dead, red/green/blue/yellow/black — but occupying exclusively one state at a time. It consists “of a set of states and a set of transitions making it possible to go from one state to another one,” Viktor Lundstrom wrote in 2016’s Human-like decision making for bots in mobile gaming. “A transition connects two states but only one way so that if the FSM is in a state that can transit to another state, it will do so if the transition requirements are met. Those requirements can be internal like how much health a character has, or it can be external like how big of a threat it is facing.”

Like light switches in Half-Life and Fallout, or the electric generators in Dead Island, FSMs are either on or they’re off, or they’re in a rigidly defined alternative state (real-world examples would include a traffic light or your kitchen microwave). These machines can transition back and forth between states given the player’s actions, but half measures like dimmer switches and low-power modes do not exist in these universes. There are few limits on the number of states that an FSM can exist in beyond the logistical challenges of programming and maintaining them all, as you can see with the Ghost Gang’s behavioral flowcharts in Jared Mitchell’s blog post, AI Programming Examples. Lundstrom points out that an FSM “offers lots of flexibility but has the downside of producing a lot of method calls,” which tie up additional system resources.
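As a toy illustration of the concept (and not Namco’s actual code), here is a minimal Python sketch of a two-state ghost in the spirit of that behavior; the state names, event methods and one-dimensional movement helpers are all invented for the example.

from enum import Enum, auto

class GhostState(Enum):
    CHASE = auto()       # coordinate to track and trap the player
    FRIGHTENED = auto()  # flee after the player eats a Power Pellet

def move_toward(my_pos, target):       # toy 1-D movement helpers (hypothetical)
    return my_pos + (1 if target > my_pos else -1)

def move_away_from(my_pos, target):
    return my_pos - (1 if target > my_pos else -1)

class Ghost:
    """Finite-state machine: exactly one state at a time, transitions driven by events."""
    def __init__(self):
        self.state = GhostState.CHASE

    def on_power_pellet_eaten(self):
        self.state = GhostState.FRIGHTENED

    def on_frightened_timer_expired(self):
        self.state = GhostState.CHASE

    def update(self, my_pos, player_pos):
        if self.state is GhostState.CHASE:
            return move_toward(my_pos, player_pos)
        return move_away_from(my_pos, player_pos)

ghost = Ghost()
print(ghost.update(0, 5))      # moves toward the player while chasing
ghost.on_power_pellet_eaten()
print(ghost.update(0, 5))      # now moves away while frightened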

Decision and behavior trees

Alternatively, game AIs can be modeled using decision trees. “There are usually no logical checks such as AND or OR because they are implicitly defined by the tree itself,” Lundstrom wrote, noting that the trees “can be built in a non-binary fashion making each decision have more than two possible outcomes.”

Behavior trees are a logical step above that, offering players contextual actions by chaining multiple smaller decisions and actions together. For example, if the character is faced with the task of passing through a closed door, they can either perform the action to turn the handle to open it or, upon finding the door locked, take the “composite action” of pulling a crowbar from inventory and breaking the locking mechanism.

“Behavior trees use what is called a reactive design where the AI tends to try things and makes its decisions from things it has gotten signals from,” Lundstrom explained. “This is good for fast phasing games where situations change quite often. On the other hand, this is bad in more strategic games where many moves should be planned into the future without real feedback.”
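To make the structure concrete, here is a minimal Python sketch of the locked-door example above using two classic node types, a selector and a sequence; the node functions and the tiny world model are invented for illustration and are far simpler than a production behavior tree.

def try_handle(world):
    return world.get("door_unlocked", False)   # succeeds only if the door is unlocked

def has_crowbar(world):
    return "crowbar" in world.get("inventory", [])

def break_lock(world):
    world["door_unlocked"] = True
    return True

def sequence(*children):    # succeeds only if every child succeeds, in order
    return lambda world: all(child(world) for child in children)

def selector(*children):    # tries children in order; the first success wins
    return lambda world: any(child(world) for child in children)

open_door = selector(
    try_handle,                           # the simple action
    sequence(has_crowbar, break_lock),    # the "composite action"
)

print(open_door({"inventory": ["crowbar"]}))   # True: falls back to the crowbar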

GOAPs and Radiant AI

From behavior trees grew GOAPs (Goal-Oriented Action Planners), which we first saw in 2005’s F.E.A.R. An AI agent empowered with GOAP will use the actions available to choose from any number of goals to work towards, which have been prioritized based on environmental factors. “This prioritization can in real-time be changed if as an example the goal of being healthy increases in priority when the health goes down,” Lundstrom wrote. He asserts that they are “a step in the right direction” but suffer the drawback that “it is harder to understand conceptually and implement, especially when bot behaviors come from emergent properties.”
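As a rough sketch of that prioritization idea (not F.E.A.R.’s actual planner), the snippet below re-scores a set of goals against the current world state each time it is asked; the goal names and scoring functions are invented, and a real GOAP agent would then plan a chain of actions to satisfy the winning goal.

goals = {
    "stay_healthy": lambda s: 1.0 - s["health"],      # priority rises as health falls
    "eliminate_threat": lambda s: s["threat_level"],
    "patrol": lambda s: 0.1,                          # low, constant priority
}

def pick_goal(state):
    # Re-evaluate every goal's priority against the live world state.
    return max(goals, key=lambda name: goals[name](state))

print(pick_goal({"health": 0.9, "threat_level": 0.4}))  # eliminate_threat
print(pick_goal({"health": 0.2, "threat_level": 0.4}))  # stay_healthy takes over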

Radiant AI, which Bethesda developed first for Elder Scrolls IV: Oblivion and then adapted to Skyrim, Fallout 3, Fallout 4 and Fallout: New Vegas, operates on a similar principle to GOAP. Whereas NPCs in Oblivion were only programmed with five or six set actions, resulting in highly predictable behaviors, by Skyrim those behaviors had expanded to location-specific sets, so that NPCs working in mines and lumber yards wouldn’t mirror the movements of folks in town. What’s more, the character’s moral and social standing with the NPC’s faction in Skyrim began to influence the AI’s reactions to the player’s actions. “Your friend would let you eat the apple in his house,” Bethesda Studios creative director Todd Howard told Game Informer in 2011, rather than reporting you to the town guard like they would if the relationship were strained.

Modern AIs

Naughty Dog’s The Last of Us series offers some of today’s most advanced NPC behaviors for enemies and allies alike. “Characters give the illusion of intelligence when they are placed in well thought-out setups, are responsive to the player, play convincing animations and sounds, and behave in interesting ways,” Mark Botta, Senior Software Engineer at Ripple Effect Studios, wrote in Infected AI in The Last of Us. “Yet all of this is easily undermined when they mindlessly run into walls or do any of the endless variety of things that plague AI characters.”

“Not only does eliminating these glitches provide a more polished experience,” he continued, “but it is amazing how much intelligence is attributed to characters that simply don’t do stupid things.”

You can see this in both the actions of enemies, whether they’re human Hunters or infected Clickers, and allies like Joel’s ward, Ellie. The game’s two primary flavors of enemy combatant are built on the same base AI system but “feel fundamentally different” from one another thanks to a “modular AI architecture that allows us to easily add, remove, or change decision-making logic,” Botta wrote.

The key to this architecture was never referring to the enemy character types in the code but rather, “[specifying] sets of characteristics that define each type of character,” Botta said. “For example, the code refers to the vision type of the character instead of testing if the character is a Runner or a Clicker … Rather than spreading the character definitions as conditional checks throughout the code, it centralizes them in tunable data.” Doing so empowers the designers to adjust character variations directly instead of having to ask for help from the AI team.
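A minimal sketch of that data-driven approach might look like the following; the character types are from the game, but the trait names and values here are invented stand-ins rather than Naughty Dog’s actual tuning data.

# Character definitions centralized in tunable data, as Botta describes.
CHARACTER_DATA = {
    "runner":  {"vision": "normal", "hearing_radius": 10.0},
    "clicker": {"vision": "blind",  "hearing_radius": 25.0},
    "hunter":  {"vision": "normal", "hearing_radius": 12.0},
}

def can_see_player(character_type, player_visible):
    traits = CHARACTER_DATA[character_type]
    # No "if this is a Clicker" checks scattered through the code:
    # behavior follows from the character's vision trait.
    return player_visible and traits["vision"] != "blind"

print(can_see_player("clicker", player_visible=True))   # False: Clickers rely on hearing
print(can_see_player("hunter", player_visible=True))    # True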

The AI system is divided into high-level logic (aka “skills”) that dictates the character’s strategy and the low-level “behaviors” they use to achieve the goal. Botta points to a character’s “move-to behavior” as one such example. So when Joel and Ellie come across a crowd of enemy characters, their approach, either by stealth or by force, is determined by that character’s skills.

“Skills decide what to do based on the motivations and capabilities of the character, as well as the current state of the environment,” he wrote. “They answer questions like ‘Do I want to attack, hide, or flee?’ and ‘What is the best place for me to be?’” And then once the character/player makes that decision, the lower-level behaviors trigger to perform the action. This could be Joel automatically ducking into cover and drawing a weapon, or Ellie scampering off to a separate nearby hiding spot, avoiding obstacles and enemy sight lines along the way (at least for the Hunters — Clickers can hear you breathing).

Tomorrow’s AIs

Generative AI systems have made headlines recently, due in large part to the runaway success of next-generation chatbots from Google, Meta, OpenAI and others, but they’ve been a mainstay in game design for years. Dwarf Fortress and Deep Rock Galactic just wouldn’t be the same without their procedurally generated levels and environments — but what if we could apply those generative principles to dialog creation too? That’s what Ubisoft is attempting with its new Ghostwriter AI.

“Crowd chatter and barks are central features of player immersion in games – NPCs speaking to each other, enemy dialogue during combat, or an exchange triggered when entering an area all provide a more realistic world experience and make the player feel like the game around them exists outside of their actions,” Ubisoft’s Roxane Barth wrote in a March blog post. “However, both require time and creative effort from scriptwriters that could be spent on other core plot items. Ghostwriter frees up that time, but still allows the scriptwriters a degree of creative control.”

The process isn’t all that different from messing around with public chatbots like Bing Chat and Bard, albeit with a few important distinctions. The scriptwriter will first come up with a character and the general idea of what that person would say. That gets fed into Ghostwriter, which then returns a rough list of potential barks. The scriptwriter can then choose a bark and edit it to meet their specific needs. The system generates these barks in pairs, and selecting one over the other serves as a quick training and refinement method: the model learns from the preferred choice and, with a few thousand repetitions, begins generating more accurate and desirable barks from the outset.
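Purely as an illustration of that pairwise-selection loop (Ubisoft has not published Ghostwriter’s internals), collecting writer preferences for later fine-tuning might look something like the sketch below; every function and name here is hypothetical.

import random

def generate_bark_pair(prompt):
    # Placeholder generator standing in for the actual model.
    return f"{prompt} (take 1)", f"{prompt} (take 2)"

preference_data = []   # (prompt, chosen, rejected) records kept for refinement

def writer_reviews(prompt, pick_first):
    a, b = generate_bark_pair(prompt)
    chosen, rejected = (a, b) if pick_first else (b, a)
    preference_data.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return chosen   # the writer can still edit this line by hand

writer_reviews("Fishmonger hawking the day's catch", pick_first=random.choice([True, False]))
print(preference_data)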

“Ghostwriter was specifically created with games writers, for the purpose of accelerating their creative iteration workflow when writing barks [short phrases]” Yves Jacquier, Executive Director at Ubisoft La Forge, told Engadget via email. “Unlike other existing chatbots, prompts are meant to generate short dialogue lines, not to create general answers.”

“From here, there are two important differences,” Jacquier continued. “One is on the technical aspect: for using Ghostwriter, writers have the ability to control and give input on dialogue generation. Second, it’s a key advantage of having developed our in-house technology: we control the costs, copyrights and confidentiality of our data, which we can re-use to further train our own model.”

Ghostwriter’s assistance doesn’t just make scriptwriters’ jobs easier, it in turn helps improve the overall quality of the game. “Creating believable large open worlds is daunting,” Jacquier said. “As a player, you want to explore this world and feel that each character and each situation is unique, involving a vast variety of characters in different moods and with different backgrounds. As such there is a need to create many variations of any mundane situation, such as one character buying fish from another in a market.”

Writing 20 different iterations of ways to shout “fish for sale” is not the most effective use of a writer’s time. “They might come up with a handful of examples before the task might become tedious,” Jacquier said. “This is exactly where Ghostwriter kicks in: proposing such dialogs and their variations to a writer, which gives the writer more variations to work with and more time to polish the most important narrative elements.”

Ghostwriter is one of a growing number of generative AI systems Ubisoft has begun to use, including voice synthesis and text-to-speech. “Generative AI has quickly found its use among artists and creators for ideation or concept art,” Jacquier said, but clarified that humans will remain in charge of the development process for the foreseeable future, regardless of coming AI advancements. “Games are a balance of technological innovation and creativity, and what makes great games is our talent – the rest are tools. While the future may involve more technology, it doesn’t take away the human in the loop.”

7.4887 billion reasons to get excited

Per a recent Market.us report, the value of generative AI in the gaming market could as much as septuple by 2032, growing from around $1.1 billion in 2023 to nearly $7.5 billion over the next decade. These gains will be driven by improvements to NPC behaviors, productivity gains from automating digital asset generation and procedurally generated content creation.

And it won’t just be major studios cranking out AAA titles that benefit from the generative AI revolution. Just as we are already seeing dozens and hundreds of mobile apps built atop ChatGPT mushrooming up on Google Play and the App Store for myriad purposes, these foundational models (not necessarily Ghostwriter itself but its inevitable open-source derivatives) are poised to spawn countless tools that will in turn empower indie game devs, modders and individual players alike. And given how quickly the need to program in proper code rather than natural language is falling away, our days of holodeck-style immersive gaming could be closer than we ever dared hope.

Catch up on all of the news from Summer Game Fest right here!

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission. All prices are correct at the time of publishing.



