CLAW Library (a C++ Library Absolutely Wonderful) 1.5.5
Find an action with the alpha-beta algorithm. More...
#include <game_ai.hpp>
Public Types

  typedef State state
  typedef State::action action
  typedef State::score score

Public Member Functions

  score operator() (int depth, const state &current_state, bool computer_turn) const
    Apply the alpha-beta algorithm to find the best action.

Private Member Functions

  score compute (int depth, const state &current_state, bool computer_turn, score alpha, score beta) const
    Find the best action using an alpha-beta algorithm.
Find an action with the alpha-beta algorithm.
Template parameters:
    State   The type of a state of the game. It must define the action and score types and provide the final(), evaluate(), next_actions(), do_action(), min_score() and max_score() member functions used by compute().
Definition at line 162 of file game_ai.hpp.
typedef State::action claw::ai::game::alpha_beta< State >::action

Definition at line 166 of file game_ai.hpp.

typedef State::score claw::ai::game::alpha_beta< State >::score

Definition at line 167 of file game_ai.hpp.

typedef State claw::ai::game::alpha_beta< State >::state

Definition at line 165 of file game_ai.hpp.
claw::ai::game::alpha_beta< State >::score claw::ai::game::alpha_beta< State >::compute ( int depth,
        const state & current_state,
        bool computer_turn,
        score alpha,
        score beta
    ) const [private]
Find the best action using an alpha-beta algorithm.
Parameters:
    depth           Depth of the search subtree we are allowed to explore.
    current_state   The state of the game.
    computer_turn   Tells whether the next action is performed by the computer.
    alpha           Worst score of the current player.
    beta            Best score of the other player.
Definition at line 240 of file game_ai.tpp.
{
  score score_val;

  // we reached a final state or we are not allowed to search more.
  if ( current_state.final() || (depth == 0) )
    score_val = current_state.evaluate();
  else
    {
      std::list<action> next_actions;
      typename std::list<action>::const_iterator it;
      State* new_state;

      // get all reachable states
      current_state.next_actions( next_actions );

      if ( next_actions.empty() )
        score_val = current_state.evaluate();
      else if (computer_turn)
        {
          score_val = current_state.min_score();
          it = next_actions.begin();

          while ( it!=next_actions.end() && (score_val < beta) )
            {
              new_state = static_cast<state*>(current_state.do_action(*it));

              // evaluate the action of the human player
              score s = compute( depth-1, *new_state, false,
                                 std::max(alpha, score_val), beta );

              // and keep the best action he can do.
              if (s > score_val)
                score_val = s;

              delete new_state;
              ++it;
            }
        }
      else // human player's turn
        {
          score_val = current_state.max_score();
          it = next_actions.begin();

          while ( it!=next_actions.end() && (score_val > alpha) )
            {
              new_state = static_cast<state*>(current_state.do_action(*it));

              // evaluate the action of the computer player
              score s = compute( depth-1, *new_state, true,
                                 alpha, std::min(beta, score_val) );

              // and keep the worst action he can do
              if (s < score_val)
                score_val = s;

              ++it;
              delete new_state;
            }
        }
    }

  return score_val;
} // alpha_beta::compute()
State::score claw::ai::game::alpha_beta< State >::operator() ( int depth,
        const state & current_state,
        bool computer_turn
    ) const
Apply the alpha-beta algorithm to find the best action.
Parameters:
    depth           Depth of the search subtree we are allowed to explore.
    current_state   The state of the game.
    computer_turn   Tells whether the next action is performed by the computer.
Definition at line 221 of file game_ai.tpp.
{
  return this->compute( depth, current_state, computer_turn,
                        current_state.min_score(), current_state.max_score() );
} // alpha_beta::operator()