Program Poker Ai

Posted By admin On 04/04/22

Expert-Level Artificial Intelligence in Heads-Up No-Limit Poker


Jun 23, 2018: Some time ago I came across Libratus, a bot built at Carnegie Mellon for playing heads-up (1 vs 1) no-limit hold'em. Libratus proved its worth by beating some of the world's best poker players: in a 20-day no-limit Texas hold'em competition, it defeated four top human opponents by about $1.77 million in chips.

PokerSnowie is an AI poker training software program designed to help you improve your game and ultimately increase your profits at the table. Available for both cash games and tournaments, its main purpose is to teach you how to become unexploitable in any situation. Advanced Poker Bot, by contrast, has been programmed to play in a fixed way based on mathematical calculations, and provides custom functions that allow 'poker bot coders' to plug in their own logic.

We wanted our Smart AI to play based on features and patterns, not a set of rules, which is what the Dumb AI was doing. Once we created the Smart AI model, we trained it using Q-learning, starting from the Dumb AI's neural network weights; after 10,000 epochs we stopped training due to clearly visible errors.


DeepStack bridges the gap between AI techniques for games of perfect information, like checkers, chess and Go, and those for imperfect information games, like poker. It reasons while it plays, using “intuition” honed through deep learning to reassess its strategy with each decision.

With a study completed in December 2016 and published in Science in March 2017, DeepStack became the first AI capable of beating professional poker players at heads-up no-limit Texas hold'em poker.

DeepStack computes a strategy based on the current state of the game for only the remainder of the hand, not maintaining one for the full game, which leads to lower overall exploitability.

DeepStack avoids reasoning about the full remaining game by substituting computation beyond a certain depth with a fast approximate estimate. Automatically trained with deep learning, DeepStack's “intuition” gives a gut feeling of the value of holding any cards in any situation.

DeepStack considers a reduced number of actions, allowing it to play at conventional human speeds. The system re-solves games in under five seconds using a simple gaming laptop with an Nvidia GPU.

Program

The first computer program to outplay human professionals at heads-up no-limit Hold'em poker

In a study completed December 2016 and involving 44,000 hands of poker, DeepStack defeated 11 professional poker players with only one outside the margin of statistical significance. Over all games played, DeepStack won 49 big blinds/100 (always folding would only lose 75 bb/100), over four standard deviations from zero, making it the first computer program to beat professional poker players in heads-up no-limit Texas hold'em poker.

Games are serious business

Poker

Don’t let the name fool you: “games” of imperfect information provide a general mathematical model that describes how decision-makers interact. AI research has a long history of using parlour games to study these models, but attention has focused primarily on perfect information games, like checkers, chess or Go. Poker is the quintessential game of imperfect information, where each player holds information the other cannot see (their private cards).

Until now, competitive AI approaches in imperfect information games have typically reasoned about the entire game, producing a complete strategy prior to play. However, to make this approach feasible in heads-up no-limit Texas hold’em—a game with vastly more unique situations than there are atoms in the universe—a simplified abstraction of the game is often needed.

A fundamentally different approach

DeepStack is the first theoretically sound application of heuristic search methods—which have been famously successful in games like checkers, chess, and Go—to imperfect information games.

At the heart of DeepStack is continual re-solving, a sound local strategy computation that only considers situations as they arise during play. This lets DeepStack avoid computing a complete strategy in advance, skirting the need for explicit abstraction.

During re-solving, DeepStack doesn’t need to reason about the entire remainder of the game because it substitutes computation beyond a certain depth with a fast approximate estimate: DeepStack’s “intuition”, a gut feeling of the value of holding any possible private cards in any possible poker situation.
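The depth-limited substitution can be sketched in a few lines. This is a toy illustration, not DeepStack's solver: the game here is a tiny minimax tree rather than poker, and `value_estimate` is a hypothetical stand-in for the trained deep value network.

```python
# Toy sketch of depth-limited lookahead: beyond max_depth, a cheap learned
# estimate ("intuition") replaces reasoning about the remaining game.

def value_estimate(state):
    # Hypothetical stand-in for DeepStack's deep value network.
    return state.get("heuristic", 0.0)

def lookahead(state, depth, max_depth):
    if "payoff" in state:      # terminal node: the exact payoff is known
        return state["payoff"]
    if depth >= max_depth:     # depth limit reached: use the fast estimate
        return value_estimate(state)
    # Otherwise expand a sparse set of actions and back up their values.
    values = [lookahead(child, depth + 1, max_depth)
              for child in state["children"]]
    return max(values) if state["to_move"] == "max" else min(values)

toy_tree = {"to_move": "max", "children": [
    {"payoff": 1.0},
    {"to_move": "min", "heuristic": 2.0,
     "children": [{"payoff": 3.0}, {"payoff": -2.0}]},
]}
shallow = lookahead(toy_tree, 0, max_depth=1)  # trusts the estimate: 2.0
exact = lookahead(toy_tree, 0, max_depth=2)    # solves the full tree: 1.0
```

The shallow search trusts the (here deliberately optimistic) estimate; what makes DeepStack's version sound is that its estimates come from a network trained to approximate the values of solved subgames.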

Finally, DeepStack’s intuition, much like human intuition, needs to be trained. We train it with deep learning using examples generated from random poker situations.

DeepStack is theoretically sound, produces strategies substantially more difficult to exploit than abstraction-based techniques and defeats professional poker players at heads-up no-limit poker with statistical significance.

Download

Paper & Supplements

Hand Histories

Team Members (front to back)

Michael Bowling, Dustin Morrill, Nolan Bard, Trevor Davis, Kevin Waugh, Michael Johanson, Viliam Lisý, Martin Schmid, Matej Moravčík, Neil Burch

Low-Variance Evaluation


The performance of DeepStack and its opponents was evaluated using AIVAT, a provably unbiased low-variance technique based on carefully constructed control variates. Thanks to this technique, which gives an unbiased performance estimate with 85% reduction in standard deviation, we can show statistical significance in matches with as few as 3,000 games.
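AIVAT itself carefully constructs its control variates from the AI's own strategy; the generic control-variate idea behind it can be sketched as follows. The skill edge, the luck model, and all numbers below are illustrative assumptions, not the AIVAT estimator.

```python
import random
import statistics

def evaluate(samples, baseline, baseline_mean):
    """Control-variate estimate: subtract from each sample a correlated
    quantity whose expectation is known, leaving the mean unbiased while
    shrinking the noise."""
    corrected = [x - (b - baseline_mean) for x, b in zip(samples, baseline)]
    return statistics.mean(corrected), statistics.stdev(corrected)

# Toy demonstration: per-hand winnings = skill edge + card luck + small noise.
# Card luck has known expectation 0, so it can serve as the control variate.
random.seed(0)
luck = [random.gauss(0, 10) for _ in range(2000)]
winnings = [0.5 + l + random.gauss(0, 1) for l in luck]

raw_sd = statistics.stdev(winnings)             # dominated by card luck
mean, corrected_sd = evaluate(winnings, luck, 0.0)
```

Here the corrected estimate recovers the small skill edge with a fraction of the raw standard deviation, which is why far fewer games are needed for statistical significance.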

Abstraction-based Approaches

Despite using ideas from abstraction, DeepStack is fundamentally different from abstraction-based approaches, which compute and store a strategy prior to play. While DeepStack restricts the number of actions in its lookahead trees, it has no need for explicit abstraction as each re-solve starts from the actual public state, meaning DeepStack always perfectly understands the current situation.

Professional Matches

We evaluated DeepStack by playing it against a pool of professional poker players recruited by the International Federation of Poker. In total, 44,852 games were played by 33 players from 17 countries. Eleven players completed the requested 3,000 games, with DeepStack beating all but one by a statistically significant margin. Over all games played, DeepStack outperformed players by over four standard deviations from zero.


Heuristic Search

At a conceptual level, DeepStack’s continual re-solving, “intuitive” local search and sparse lookahead trees describe heuristic search, which is responsible for many AI successes in perfect information games. Until DeepStack, no theoretically sound application of heuristic search was known in imperfect information games.


Libratus is an artificial intelligence computer program designed to play poker, specifically heads-up no-limit Texas hold 'em. Libratus' creators intend for it to be generalisable to other, non-poker-specific applications. It was developed at Carnegie Mellon University in Pittsburgh.

Background[edit]

While Libratus was written from scratch, it is the nominal successor of Claudico. Like its predecessor, its name is a Latin expression and means 'balanced'.

Libratus was built with more than 15 million core hours of computation, compared to 2 to 3 million for Claudico. The computations were carried out on the new 'Bridges' supercomputer at the Pittsburgh Supercomputing Center. According to one of Libratus' creators, Professor Tuomas Sandholm, Libratus does not have a fixed built-in strategy, but an algorithm that computes the strategy. The technique involved is a new variant of counterfactual regret minimization,[1] namely the CFR+ method introduced in 2014 by Oskari Tammelin.[2] On top of CFR+, Libratus used a new technique that Sandholm and his PhD student, Noam Brown, developed for the problem of endgame solving. Their new method dispenses with the prior de facto standard in poker programming, called 'action mapping'.
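Counterfactual regret minimization is built on regret matching, where each action is played with probability proportional to its accumulated positive regret. The following is a minimal textbook sketch for rock-paper-scissors against a fixed, exploitable opponent; it is illustrative only and is not Libratus' CFR+ variant.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # row player's payoff

def strategy_from_regrets(regrets):
    # Play each action proportionally to its positive accumulated regret.
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    if total > 0:
        return [p / total for p in positives]
    return [1.0 / ACTIONS] * ACTIONS  # uniform when no positive regret

def train(iterations):
    regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    opponent = [0.4, 0.3, 0.3]  # rock-heavy fixed opponent, for illustration
    for _ in range(iterations):
        strategy = strategy_from_regrets(regrets)
        for a in range(ACTIONS):
            strategy_sum[a] += strategy[a]
        my_action = random.choices(range(ACTIONS), strategy)[0]
        opp_action = random.choices(range(ACTIONS), opponent)[0]
        for a in range(ACTIONS):
            # Regret of not having played a instead of the chosen action.
            regrets[a] += PAYOFF[a][opp_action] - PAYOFF[my_action][opp_action]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]  # average strategy

random.seed(1)
avg_strategy = train(20_000)
# The average strategy shifts toward paper, the best response to
# a rock-heavy opponent.
```

In self-play (both players minimizing regret), the average strategies converge toward a Nash equilibrium; CFR extends this idea to the sequential decisions of a full poker game tree.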

As Libratus plays only against one other human or computer player, the special 'heads up' rules for two-player Texas hold 'em are enforced.


2017 humans versus AI match[edit]

From January 11 to 31, 2017, Libratus was pitted in a tournament against four top-class human poker players,[3] namely Jason Les, Dong Kim, Daniel McAulay and Jimmy Chou. In order to gain results of more statistical significance, 120,000 hands were to be played, a 50% increase compared to the previous tournament that Claudico played in 2015. To manage the extra volume, the duration of the tournament was increased from 13 to 20 days.

The four players were grouped into two subteams of two players each. One of the subteams was playing in the open, while the other subteam was located in a separate room nicknamed 'The Dungeon' where no mobile phones or other external communications were allowed. The Dungeon subteam got the same sequence of cards as was being dealt in the open, except that the sides were switched: The Dungeon humans got the cards that the AI got in the open and vice versa. This setup was intended to nullify the effect of card luck.
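Why the mirrored deals nullify card luck can be seen in a toy model. Everything below (the skill edge, the luck magnitudes) is a made-up illustration of the variance-reduction idea, not a model of the actual matches.

```python
import random
import statistics

def play(skill_edge, card_luck, noise):
    # Outcome of one hand for one seat: skill plus card luck plus
    # hand-to-hand playing noise.
    return skill_edge + card_luck + noise

def mirrored_match(skill_edge, n_hands, rng):
    """Same card sequence played in both rooms with the seats switched:
    the luck term enters the two results with opposite signs and cancels
    when the pair is averaged."""
    results = []
    for _ in range(n_hands):
        luck = rng.gauss(0, 10)  # luck of one dealt card sequence
        open_room = play(skill_edge, luck, rng.gauss(0, 1))
        dungeon = play(skill_edge, -luck, rng.gauss(0, 1))  # seats switched
        results.append((open_room + dungeon) / 2)
    return results

rng = random.Random(0)
paired = mirrored_match(0.5, 1000, rng)
unpaired = [play(0.5, rng.gauss(0, 10), rng.gauss(0, 1)) for _ in range(1000)]
```

The paired results have a far smaller spread than unpaired ones, so the same number of hands yields a much sharper estimate of the skill difference.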

The prize money of $200,000 was shared exclusively between the human players. Each player received a minimum of $20,000, with the rest distributed in relation to their success playing against the AI. As written in the tournament rules in advance, the AI itself did not receive prize money even though it won the tournament against the human team.

During the tournament, Libratus was competing against the players during the days. Overnight it was perfecting its strategy on its own by analysing the prior gameplay and results of the day, particularly its losses. Therefore, it was able to continuously straighten out the imperfections that the human team had discovered in their extensive analysis, resulting in a permanent arms race between the humans and Libratus. It used another 4 million core hours on the Bridges supercomputer for the competition's purposes.


Strength of the AI[edit]

Libratus had been leading against the human players from day one of the tournament. The player Dong Kim was quoted on the AI's strength as follows: 'I didn’t realize how good it was until today. I felt like I was playing against someone who was cheating, like it could see my cards. I’m not accusing it of cheating. It was just that good.'[4]

On the 16th day of the competition, Libratus broke through the $1,000,000 barrier for the first time; at the end of that day, it was ahead $1,194,402 in chips against the human team. At the end of the competition, Libratus was ahead $1,766,250 in chips and thus won resoundingly. As the big blind in the matches was set to $100, Libratus' winrate is equivalent to 14.7 big blinds per 100 hands. This is considered an exceptionally high winrate in poker and is highly statistically significant.[5]
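The quoted winrate follows directly from the figures above: total chips won, converted to big blinds, normalised per 100 hands.

```python
# Winrate in big blinds per 100 hands, from chips won, the big blind
# size, and the number of hands played.

def winrate_bb_per_100(chips_won, big_blind, hands):
    return chips_won / big_blind / hands * 100

rate = winrate_bb_per_100(1_766_250, 100, 120_000)
# $1,766,250 at a $100 big blind is 17,662.5 bb over 120,000 hands,
# i.e. about 14.7 bb/100.
```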

Of the human players, Dong Kim came first, Daniel McAulay second, Jimmy Chou third, and Jason Les fourth.

Name            Rank  Result (in chips)
Dong Kim        1     -$85,649
Daniel McAulay  2     -$277,657
Jimmy Chou      3     -$522,857
Jason Les       4     -$880,087
Total                 -$1,766,250

Other possible applications[edit]

While Libratus' first application was to play poker, its designers have a much broader mission in mind for the AI.[6] The investigators designed the AI to be able to learn any game or situation in which incomplete information is available and 'opponents' may be hiding information or even engaging in deception. Because of this Sandholm and his colleagues are proposing to apply the system to other, real-world problems as well, including cybersecurity, business negotiations, or medical planning.[7]


References[edit]

  1. Hsu, Jeremy (10 January 2017). 'Meet the New AI Challenging Human Poker Pros'. IEEE Spectrum. Retrieved 2017-01-15.
  2. Brown, Noam; Sandholm, Tuomas (2017). 'Safe and Nested Endgame Solving for Imperfect-Information Games' (PDF). Proceedings of the AAAI Workshop on Computer Poker and Imperfect Information Games.
  3. Spice, Byron; Allen, Garrett (January 4, 2017). 'Upping the Ante: Top Poker Pros Face Off vs. Artificial Intelligence'. Carnegie Mellon University. Retrieved 2017-01-12.
  4. Metz, Cade (24 January 2017). 'Artificial Intelligence Is About to Conquer Poker—But Not Without Human Help'. Wired. Retrieved 2017-01-24.
  5. 'Libratus Poker AI Beats Humans for $1.76m; Is End Near?'. PokerListings. 30 January 2017. Retrieved 2018-03-16.
  6. Knight, Will (January 23, 2017). 'Why it's a big deal that AI knows how to bluff in poker'. MIT Technology Review.
  7. 'Artificial Intelligence Wins $800,000 Against 4 Poker Masters'. Interesting Engineering. 27 January 2017.

External links[edit]


  • Brains versus Artificial Intelligence official website at the Rivers Casino


Retrieved from 'https://en.wikipedia.org/w/index.php?title=Libratus&oldid=993874605'