k-ToM Functions#

These are the internal functions used by the k-ToM agents. In general, we don’t recommend calling these directly; they are mostly for internal use in the TOM agent class.
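For everyday use, go through the agent class instead. A minimal sketch, assuming the ts.TOM class and its compete method as described in the tomsup tutorial (treat the exact parameter values as illustrative):

import tomsup as ts

penny = ts.PayoffMatrix(name="penny_competitive")  # matching pennies game
tom_1 = ts.TOM(level=1, volatility=-2, b_temp=-1)  # a 1-ToM agent

# The agent wraps init_k_tom/k_tom internally; there is no opponent
# choice to learn from on the first round, hence op_choice=None.
choice = tom_1.compete(p_matrix=penny, agent=0, op_choice=None)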

tomsup.ktom_functions#

This script contains all functions related to the implementation of the k-ToM agent.

tomsup.ktom_functions.decision_function(new_internal_states: dict, params: dict, agent: int, level: int, p_matrix: PayoffMatrix) → Tuple[float, float][source]#

The decision function of the k-ToM agent

Parameters:
  • new_internal_states (dict) – Dict of updated internal states

  • params (dict) – The parameters

  • agent (int) – The perspective of the agent in the payoff matrix (0 or 1)

  • level (int) – The sophistication level of the agent

  • p_matrix (PayoffMatrix) – A payoff matrix

Returns:

A tuple containing the probability of the agent choosing 1 and the probability of the opponent choosing 1.

Return type:

Tuple[float, float]

Examples

>>> penny = PayoffMatrix(name = "penny_competitive")
>>> new_internal_states = {'opponent_states': {}, 'own_states': {'p_op_mean0': 30, 'p_op_var0': 2}}
>>> params = {'volatility': -2, 'b_temp': -1}
>>> decision_function(new_internal_states, params, agent = 0, level = 0, p_matrix = penny)
-5.436561973742046
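Note that the doctest output is negative, which suggests the returned values are on a log-odds scale (consistent with the p_op_mean0 value of 30 above) rather than raw probabilities. Assuming so, a hypothetical inverse-logit helper maps the output back to a probability:

import numpy as np

def inv_logit(x):
    # hypothetical helper; tomsup performs this mapping internally
    return 1 / (1 + np.exp(-x))

print(inv_logit(-5.436561973742046))  # ~0.0043: agent 0 almost never chooses 1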
tomsup.ktom_functions.expected_payoff_fun(p_op: float, agent: int, p_matrix: PayoffMatrix)[source]#

Calculate the expected payoff of choosing 1 over 0

Parameters:
  • p_op (float) – The probability of the opponent choosing 1

  • agent (int) – The perspective of the agent

  • p_matrix (PayoffMatrix) – A payoff matrix

Returns:

The expected payoff

Examples

>>> staghunt = PayoffMatrix(name = 'staghunt')
>>> expected_payoff_fun(1, agent = 0, p_matrix = staghunt)
2
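As a sanity check on the doctest output, here is the same expectation by hand, assuming the classic stag hunt payoffs (hunting stag together pays 5, stag alone 0, hare always 3; these values are an assumption about tomsup’s staghunt matrix):

p_op = 1.0                           # opponent certainly chooses 1 (stag)
ev_stag = p_op * 5 + (1 - p_op) * 0  # expected payoff of choosing 1
ev_hare = p_op * 3 + (1 - p_op) * 3  # expected payoff of choosing 0
print(ev_stag - ev_hare)             # 2.0, matching the doctest above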
tomsup.ktom_functions.gradient_update(params, p_op_mean, param_mean, sim_prev_internal_states, sim_self_choice, sim_op_choice, sim_level, sim_agent, p_matrix, **kwargs)[source]#

The gradient update of the k-ToM agent

tomsup.ktom_functions.init_k_tom(params: dict, level: int, priors: Union[dict, str] = 'default')[source]#

Initialization function of the k-ToM agent

Parameters:
  • params (dict) – The starting parameters

  • level (int) – The sophistication level of the agent

  • priors (Union[dict, str], optional) – The priors of the k-ToM agent. Defaults to “default”. See the tutorial on how to set the internal states of the k-ToM agent.

Examples

>>> init_k_tom(params = {'volatility': -2, 'b_temp': -1, 'bias': 0}, level=1, priors='default')
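The returned dict holds the agent’s starting internal states. A short sketch inspecting it; the exact keys are an assumption, inferred from the decision_function example above:

from tomsup.ktom_functions import init_k_tom

states = init_k_tom(params={"volatility": -2, "b_temp": -1, "bias": 0}, level=1)
print(states.keys())  # expected to include 'own_states' and 'opponent_states'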
tomsup.ktom_functions.k_tom(prev_internal_states: dict, params: dict, self_choice: int, op_choice: int, level: int, agent: int, p_matrix: PayoffMatrix, **kwargs) → Tuple[int, dict][source]#

The full k-ToM implementation

Parameters:
  • prev_internal_states (dict) – Dict of previous internal states

  • params (dict) – The parameters

  • self_choice (int) – The agent’s choice in the previous round

  • op_choice (int) – The opponent’s choice in the previous round

  • level (int) – The sophistication level of the agent

  • agent (int) – The perspective of the agent in the payoff matrix (0 or 1)

  • p_matrix (PayoffMatrix) – A payoff matrix

Returns:

A tuple containing the choice and the updated internal states

Return type:

Tuple[int, dict]
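A minimal sketch of what the TOM agent class otherwise does for you: initialise internal states with init_k_tom, then feed each round’s choices back through k_tom. Seeding the first round with fixed choices is a simplifying assumption made here for brevity:

import tomsup as ts
from tomsup.ktom_functions import init_k_tom, k_tom

penny = ts.PayoffMatrix(name="penny_competitive")
params = {"volatility": -2, "b_temp": -1, "bias": 0}

states_0 = init_k_tom(params, level=0)  # a 0-ToM agent
states_1 = init_k_tom(params, level=1)  # a 1-ToM agent
choice_0, choice_1 = 0, 1               # arbitrary seed choices for round one

for _ in range(5):
    new_0, states_0 = k_tom(states_0, params, choice_0, choice_1,
                            level=0, agent=0, p_matrix=penny)
    new_1, states_1 = k_tom(states_1, params, choice_1, choice_0,
                            level=1, agent=1, p_matrix=penny)
    choice_0, choice_1 = new_0, new_1   # both agents update on the same history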

tomsup.ktom_functions.learning_function(prev_internal_states: dict, params: dict, self_choice: int, op_choice: int, level: int, agent: int, p_matrix: PayoffMatrix, **kwargs) → dict[source]#

The general learning function for the k-ToM agent

Parameters:
  • prev_internal_states (dict) – Previous internal states

  • params (dict) – The parameters

  • self_choice (int) – The agent’s choice in the previous round

  • op_choice (int) – The opponent’s choice in the previous round

  • level (int) – The sophistication level of the agent

  • agent (int) – The perspective of the agent in the payoff matrix (0 or 1)

  • p_matrix (PayoffMatrix) – A payoff matrix

Returns:

The updated internal states

Return type:

dict
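k_tom calls this internally after observing each round. A hedged sketch of a direct call, reusing states from init_k_tom (the choice values are illustrative):

import tomsup as ts
from tomsup.ktom_functions import init_k_tom, learning_function

penny = ts.PayoffMatrix(name="penny_competitive")
params = {"volatility": -2, "b_temp": -1}

states = init_k_tom(params, level=0)
new_states = learning_function(states, params, self_choice=1, op_choice=0,
                               level=0, agent=0, p_matrix=penny)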

tomsup.ktom_functions.p_k_udpate(prev_p_k: array, p_opk_approx: array, op_choice: int, dilution=None)[source]#

k-ToM updates its estimate of the opponent’s sophistication level. If k-ToM has a dilution parameter, it partially forgets its learned estimates.

Examples

>>> p_k_udpate(prev_p_k = np.array([1.]), p_opk_approx = np.array([-0.69314718]), op_choice = 1, dilution = None)
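The p_opk_approx value in the doctest is log(0.5), i.e. a 50/50 estimate of the opponent’s choice, assuming the approximation is stored in log space:

import numpy as np
print(np.log(0.5))  # -0.6931471805599453, the value passed above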
tomsup.ktom_functions.p_op0_fun(p_op_mean0: float, p_op_var0: float)[source]#

0-ToM combines the mean and variance of its parameter estimate into a final choice probability estimate. To avoid unidentifiability problems, this function does not use 0-ToM’s volatility parameter.

Examples

>>> p_op0_fun(p_op_mean0 = 0.7, p_op_var0 = 0.3)
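For intuition, Daunizeau (2017) derives a moment-matching approximation for the expectation of a sigmoid under a Gaussian, p ≈ sigmoid(mu / sqrt(1 + a·var)) with a ≈ 0.368. Whether tomsup uses exactly this constant is an assumption; the sketch below only illustrates the general shape of the computation:

import numpy as np

def approx_choice_prob(mu, var, a=0.368):
    # moment-matching approximation of E[sigmoid(x)] for x ~ N(mu, var);
    # the constant a and its use here are assumptions about tomsup's code
    return 1 / (1 + np.exp(-mu / np.sqrt(1 + a * var)))

print(approx_choice_prob(0.7, 0.3))  # roughly what p_op0_fun(0.7, 0.3) estimates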

tomsup.ktom_functions.p_op_mean0_update(prev_p_op_mean0: float, p_op_var0: float, op_choice: int)[source]#

0-ToM updates its mean choice probability estimate

tomsup.ktom_functions.p_op_var0_update(prev_p_op_mean0: float, prev_p_op_var0: float, volatility: float)[source]#

Variance update of the 0-ToM

Examples

>>> p_op_var0_update(1, 0.2, 1)
0.8348496471878395
>>> # Higher volatility results in a higher variance
>>> p_op_var0_update(1, 0.2, 1) < p_op_var0_update(1, 0.2, 2)
True
>>> # The volatility increase outweighs the larger prior variance here
>>> p_op_var0_update(1, 0.45, 1) < p_op_var0_update(1, 0.2, 2)
True
tomsup.ktom_functions.p_opk_approx_fun(prev_p_op_mean: array, prev_param_var: array, prev_gradient: array, level: int)[source]#

Approximates the opponent’s estimated choice probability on the previous round, using a semi-analytical approximation derived in Daunizeau (2017).

Examples

>>> p_opk_approx_fun(prev_p_op_mean = np.array([0]), prev_param_var = np.array([[0, 0, 0]]), prev_gradient = np.array([[0, 0, 0]]), level = 1)
tomsup.ktom_functions.p_opk_fun(p_op_mean: array, param_var: array, gradient: array)[source]#

k-ToM combines the mean choice probability estimate and the variances of its parameter estimates into a final choice probability estimate. To avoid unidentifiability problems, this function does not use the volatility parameter.

tomsup.ktom_functions.param_mean_update(prev_p_op_mean: array, prev_param_mean: array, prev_gradient: array, p_k: array, param_var, op_choice: int)[source]#

k-ToM updates its estimates of the opponent’s parameter values

Examples

>>> param_mean_update(prev_p_op_mean = np.array([0]), prev_param_mean = np.array([[0, 0, 0]]), prev_gradient = np.array([0, 0, 0]), p_k = np.array([0, 0, 0]), param_var = np.array([[0, 0, 0]]), op_choice = 1)
tomsup.ktom_functions.param_var_update(prev_p_op_mean: array, prev_param_var: array, prev_gradient: array, p_k: array, volatility: float, volatility_dummy=None, **kwargs)[source]#

k-ToM updates its uncertainty (variance) on its estimates of the opponent’s parameter values

Examples

>>> param_var_update(prev_p_op_mean = np.array([[0, 0, 0]]), prev_param_var = np.array([[0, 0, 0]]), prev_gradient = np.array([0, 0, 0]), p_k = np.array([1.]), volatility = -2, volatility_dummy = None)
array([[0.12692801, 0.        , 0.        ]])