researchpilot-data / chunks /1805.11088_semantic.json
[
{
"chunk_id": "ca439060-49ec-40ac-ae91-c64472d55be5",
"text": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation\nGuiliang Liu and Oliver Schulte\nSimon Fraser University, Burnaby, Canada\ngla68@sfu.ca, oschulte@cs.sfu.ca\nA variety of machine learning models have been proposed to assess the performance of players in professional sports. However, they have only a limited ability to model how player performance depends on the game context. This paper proposes a new approach to capturing game context: we apply Deep Reinforcement Learning (DRL) to learn an action-value Q function from 3M play-by-play events in the National Hockey League (NHL). The neural network representation integrates both continuous context signals and game history, using a possession-based LSTM. The learned Q-function is used to value players' actions under different game contexts. To assess a player's overall performance, we introduce a novel Goal Impact Metric (GIM) that aggregates the values of the player's actions. Empirical evaluation shows GIM is consistent throughout a play season, and correlates highly with standard success measures and future salary.\nFigure 1: Ice Hockey Rink. Ice hockey is a fast-paced team sport, where two teams of skaters must shoot a puck into their opponent's net to score goals.\n1 Introduction: Valuing Actions and Players\nWith the advancement of high-frequency optical tracking and object detection systems, more and larger event stream datasets for sports matches have become available. There is increasing opportunity for large-scale machine learning to model complex sports dynamics. Player evaluation is a major task for sports teams, which must decide whom to draft, sign or trade. Many models have been proposed [Buttrey et al., 2011; Macdonald, 2011; Decroos et al., 2018]. Recently, Markov models have been used to address these limitations. [Routley and Schulte, 2015] used states of a Markov Game Model to capture game context and compute a Q function, representing the chance that a team scores the next goal, for all actions. [Cervone et al., 2014] applied a competing risk framework with Markov chains to model game context, and developed EPV, a point-wise conditional value similar to a Q function, for each action. The Q-function concept offers two key advantages for assigning values to actions [Schulte et al., 2017a; Decroos et al., 2018]: 1) All actions are scored on the same scale by looking ahead to expected outcomes. 2) Action values reflect the match context in which they occur. For example, a late check near the opponent's goal generates different scoring chances than a check at other locations or times. Previous models capture only a partial game context in the real sports match, but nonetheless the models assume full observability.",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 0,
"total_chunks": 23,
"char_count": 2663,
"word_count": 406,
"chunking_strategy": "semantic"
},
{
"chunk_id": "1a7db438-46c1-4195-b0aa-4cca3313bf50",
"text": "The most common approach has been to quantify the value of a player's action, and to evaluate players by the total value of the actions they took [Schuckers and Curro, 2013; McHale et al., 2012]. However, traditional sports models assess only the actions that have immediate impact on goals (e.g. shots), but not the actions that lead up to them (e.g. pass, reception). And action values are assigned taking into account only a limited context of the action. But in realistic professional sports, the relevant context is very complex, including game time, position of players, score and manpower differential, etc. Previous approaches discretized input features, which leads to loss of information. In this work, we utilize a deep reinforcement learning (DRL) model to learn an action-value Q function for capturing the current match context. The neural network representation can easily incorporate continuous quantities like rink location and game time. To handle partial observability, we introduce a possession-based Long Short Term Memory (LSTM) architecture that takes into account the current play history. Unlike most previous work on active reinforcement learning (RL), which aims to compute optimal strategies for complex continuous-flow games [Hausknecht and Stone, 2015; Mnih et al., 2015], we solve a prediction (not control) problem in the passive learning (on-policy) setting [Sutton and Barto, 1998]. We use RL as a behavioral analytics tool for real human agents, not to control artificial agents. Given dynamic game tracking data, we apply Reinforcement Learning to estimate the action value function Q(s, a), which assigns a value to action a given game state s. We define a new player evaluation metric called the Goal Impact Metric (GIM) to value each player, based on the aggregated impact of their actions, which is defined in Section 6 below. Given a Q-function, the impact of an action is the change in Q-value due to the action. Our novel Goal Impact Metric (GIM) aggregates the impact of all actions of a player.",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 2,
"total_chunks": 23,
"char_count": 1957,
"word_count": 314,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6991bbef-0982-499d-955f-80e43c43ccdb",
"text": "Our novel Goal Impact Metric (GIM) aggregates the impact of all actions of a player. To our knowledge, this is the first player evaluation metric based on DRL. The GIM metric measures both players' offensive and defensive contributions to goal scoring. For player evaluation, similar to clustering, ground truth is not available. Player evaluation is a descriptive task rather than a predictive generalization problem. As game event data does not provide a ground-truth rating of player performance, our experiments assess player evaluation as an unsupervised problem in Section 7. A common methodology [Routley and Schulte, 2015; Pettigrew, 2015] is to assess the predictive value of a player evaluation metric for standard measures of success. Empirical comparison between 7 player evaluation metrics finds that 1) given a complete season, GIM correlates the most with 12 standard success measures and is the most temporally consistent metric, 2) given partial game information, GIM generalizes best to future salary and season total success.\nFigure 2: System Flow for Player Evaluation\n2 Related Work\nWe discuss the previous work most related to our approach. Deep Reinforcement Learning. Previous DRL work has focused on control in continuous-flow games, not prediction [Mnih et al., 2015]. Among these papers, [Hausknecht and Stone, 2015] use a very similar network architecture to ours, but with a fixed trace length parameter rather than our possession-based method. Hausknecht and Stone find that for partially observable control problems, the LSTM mechanism outperforms a memory window. Our study confirms this finding in an on-policy prediction problem.\n4 Play Dynamic in NHL\nWe utilize a dataset constructed by SPORTLOGiQ using computer vision techniques. The data provide information about game events and player actions for the entire 2015-2016 season of the NHL (the largest professional ice hockey league), which contains 3,382,129 events, covering 30 teams, 1,140 games and 2,233 players. Table 1 shows an excerpt. The data track events around the puck, and record the identity and actions of the player in possession, with space and time stamps, and features of the game context.",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 3,
"total_chunks": 23,
"char_count": 2124,
"word_count": 329,
"chunking_strategy": "semantic"
},
{
"chunk_id": "bc7c8be6-3463-4864-9c26-a0fc912e6605",
"text": "The table utilizes adjusted spatial coordinates, where negative numbers refer to the defensive zone of the acting player and positive numbers to his offensive zone. Adjusted X-coordinates run from -100 to +100, Y-coordinates from 42.5 to -42.5, and the origin is at the ice center as in Figure 1. We augment the data with derived features in Table 2 and list the complete feature set in Table 3. Player Evaluation. Albert et al. 2017 provide several up-to-date survey articles about evaluating players. A fundamental difficulty for action value counts in continuous-flow games is that they traditionally have been restricted to goals and actions immediately related to goals (e.g. shots). The Q-function solves this problem by using lookahead to assign values to all actions.",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 4,
"total_chunks": 23,
"char_count": 731,
"word_count": 118,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b696c694-0bef-4aa0-bc43-de527eff88a1",
"text": "The Q-function solves this problem by using lookahead to assign values to all actions. Player Evaluation with Reinforcement Learning. Using the Q-function to evaluate players is a recent development [Schulte et al., 2017a; Cervone et al., 2014; Routley and Schulte, 2015]. Schulte et al. discretized location and time coordinates and applied dynamic programming to learn a Q-function. We apply the Markov Game framework [Littman, 1994] to learn an action value function for NHL play. Our notation for RL concepts follows [Mnih et al., 2015]. There are two agents, Home resp. Away, representing the home resp. away team. The reward, represented by the goal vector gt, is a 1-of-3 indicator vector that specifies which team scores (Home, Away, Neither).",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 5,
"total_chunks": 23,
"char_count": 731,
"word_count": 116,
"chunking_strategy": "semantic"
},
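The chunk above defines the reward as a 1-of-3 indicator vector gt over (Home, Away, Neither). A minimal sketch of constructing such a vector (illustrative only, not the authors' code; the function name is hypothetical):

```python
def goal_vector(scoring_team):
    """Build the 1-of-3 reward indicator g_t over (Home, Away, Neither):
    a 1 marks which team (if any) scores at time t."""
    teams = ("Home", "Away", "Neither")
    if scoring_team not in teams:
        raise ValueError(f"unknown team: {scoring_team}")
    return [1 if t == scoring_team else 0 for t in teams]
```

For most time steps no goal occurs, so the "Neither" component carries the 1.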
{
"chunk_id": "a8a9412b-2bbb-4bbf-b166-0b36b2c43468",
"text": "Discretization leads to loss of information, undesirable spatio-temporal discontinuities in the Q-function, and poor generalization to unobserved parts of the state space. For basketball, Cervone et al. defined a player performance metric based on an expected point value model that is equivalent to a Q-function. Their approach assumes complete observability (of all players at all times), while our data provide partial observability only.\n3 Task Formulation and Approach\nAn action at is one of 13 types, including shot, block, assist, etc., together with a mark that specifies the team executing the action, e.g. Shot(Home). An observation is a feature vector xt for discrete time step t that specifies a value for the 10 features listed in Table 3. We use the complete sequence st = (xt, at-1, xt-1, ..., x0) as the state representation at time step t [Mnih et al., 2015], which satisfies the Markov property. We divide NHL games into goal-scoring episodes, so that each episode 1) begins at the beginning of the game, or immediately after a goal, and 2) terminates with a goal or the end of the game.",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 6,
"total_chunks": 23,
"char_count": 1123,
"word_count": 189,
"chunking_strategy": "semantic"
},
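The chunk above divides each NHL game into goal-scoring episodes: an episode starts at the game start or right after a goal, and ends with a goal or the end of the game. A minimal sketch of that segmentation, assuming simplified event records with a boolean `is_goal` flag (a hypothetical stand-in for the paper's feature vectors):

```python
def split_into_episodes(events):
    """Split a game's chronological event list into goal-scoring episodes.

    Each episode begins at the game start or immediately after a goal,
    and terminates with a goal or the end of the game.
    `events` is a list of dicts with a boolean 'is_goal' field.
    """
    episodes, current = [], []
    for event in events:
        current.append(event)
        if event["is_goal"]:          # a goal terminates the episode
            episodes.append(current)
            current = []
    if current:                        # remainder ends with the game end
        episodes.append(current)
    return episodes
```

For example, a game with a goal on its second event yields two episodes: one of length 2 ending in the goal, and one holding the remaining play.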
{
"chunk_id": "5a03b635-b699-4e02-8c67-1183ebf0abe7",
"text": "Player evaluation (the \"Moneyball\" problem) is one of the most studied tasks in sports analytics. Players are rated by their observed performance over a set of games. Our approach to evaluating players is illustrated in Figure 2. A Q function represents the conditional probability of the event that the home resp. away team scores the goal at the end of the current episode (denoted goalHome = 1 resp. goalAway = 1), or neither team does (denoted goalNeither = 1):\nQteam(s, a) = P(goalteam = 1 | st = s, at = a)\nwhere team is a placeholder for one of Home, Away, Neither. This Q-function represents the probability that a team scores the next goal, given current play dynamics in the NHL (cf. Schulte et al.; Routley and Schulte).\nTable 1: Dataset Example. Legend: GID=GameId, PID=PlayerId, GT=GameTime, TID=TeamId, MP=Manpower, GD=Goal Difference, OC=Outcome, S=Succeed, F=Fail, P=Team Possessing puck, H=Home, A=Away, H/A=Team who performs action, TR=Time Remain, PN=Play Number, D=Duration\nGID PID GT TID X Y MP GD Action OC P Velocity TR D Angle H/A PN\n1365 126 14.3 6 -11.0 25.5 Even 0 Lpr S A (-23.4, 1.5) 3585.7 3.4 0.250 A 4\n1365 126 17.5 6 -23.5 -36.5 Even 0 Carry S A (-4.0, -3.5) 3582.5 3.1 0.314 A 4\n1365 270 17.8 23 14.5 35.5 Even 0 Block S A (-27.0, -3.0) 3582.2 0.3 0.445 H 4\n1365 126 17.8 6 -18.5 -37.0 Even 0 Pass F A (0, 0) 3582.2 0.0 0.331 A 4\n1365 609 19.3 23 -28.0 25.5 Even 0 Lpr S H (-30.3, -7.5) 3580.6 1.5 0.214 H 5\n1365 609 19.3 23 -28.0 25.5 Even 0 Pass S H (0, 0) 3580.6 0.0 0.214 H 5\nTable 2: Derived Features\nTable 3: Complete Feature List (Name, Type, Range)\nX Coordinate of Puck, Continuous, [-100, 100]\nY Coordinate of Puck, Continuous, [-42.5, 42.5]\nVelocity of Puck, Continuous, (-inf, +inf)\nGame Time Remain, Continuous, [0, 3600]\nScore Differential, Discrete, (-inf, +inf)\nManpower Situation, Discrete, {EV, SH, PP}\nEvent Duration, Continuous, [0, +inf)\nAction Outcome, Discrete, {successful, failure}\nAngle between puck and goal, Continuous, [-3.14, 3.14]\nHome or Away Team, Discrete, {Home, Away}\nFigure 3: Our design is a 5-layer network with 3 hidden layers. Each hidden layer contains 1000 nodes, which utilize a relu activation function. The first hidden layer is the LSTM layer; the remaining layers are fully connected. Temporal-difference learning looks ahead to the next goal, and the LSTM memory traces back to the beginning of the play (the last possession change).",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 7,
"total_chunks": 23,
"char_count": 2318,
"word_count": 415,
"chunking_strategy": "semantic"
},
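The chunk above states that the three output nodes (Home, Away, Neither) are normalized to probabilities that sum to one. A sketch of one standard way to do this, a numerically stable softmax; the paper does not specify the exact normalization, so this is an assumption for illustration:

```python
import math

def normalize_q(scores):
    """Map three raw network outputs (Home, Away, Neither) to
    probabilities summing to 1, via a numerically stable softmax.
    The softmax choice is an assumption; the paper only says the
    outputs are normalized to probabilities."""
    m = max(scores)                          # shift for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

Larger raw scores map to larger probabilities, and the three values always sum to one.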
{
"chunk_id": "c7f59c78-9508-4ca5-80a3-18277b2f7edb",
"text": "Different Q-functions for different expected outcomes have been used to capture different aspects of NHL play dynamics, such as match win [Pettigrew, 2015; Kaplan et al., 2014; Routley and Schulte, 2015] and penalties [Routley and Schulte, 2015]. For player evaluation, the next-goal Q function has three advantages. 1) The next-goal reward captures what a coach expects from a player. For example, if a team is ahead by two goals with one minute left in the match, a player's actions have a negligible effect on the final match outcome. Nonetheless professionals should keep playing as well as they can and maximize the scoring chances for their own team. 2) The Q-values are easy to interpret, since they model the probability of an event that is a relatively short time away (compared to the final match outcome). 3) Increasing the probability that a player's team scores the next goal captures both offensive and defensive value. For example, a defensive action like blocking a shot decreases the probability that the other team will score the next goal, thereby increasing the probability that the player's own team will score the next goal.\n5 Learning Q values with DP-LSTM Sarsa\nWe take a function approximation approach and learn a neural network that represents the Q-function Qteam(s, a).\n5.1 Network Architecture\nFigure 3 shows our model structure. Three output nodes represent the estimates of QHome(s, a), QAway(s, a) and QNeither(s, a). Output values are normalized to probabilities. The estimated Q-functions for each team share weights. The network architecture is a Dynamic LSTM that takes as inputs a current sequence st, an action at and a dynamic trace length tlt; the LSTM memory traces back to the beginning of the play (the last possession change).[1]\n5.2 Weight Training\nWe apply an on-policy Temporal Difference (TD) prediction method, Sarsa [Sutton and Barto, 1998, Ch. 6.4], to estimate Qteam(s, a) for the NHL play dynamics observed in our dataset. Weights ΞΈ are optimized by minibatch gradient descent via backpropagation. We used batch size 32 (determined experimentally). The Sarsa gradient descent update at time step t is based on a squared-error loss function.\n[1] We experimented with a single hidden layer, but weight training failed to converge.",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 8,
"total_chunks": 23,
"char_count": 2223,
"word_count": 349,
"chunking_strategy": "semantic"
},
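The chunk above describes on-policy Sarsa prediction minimizing the squared TD error (gt + Q(st+1, at+1) - Q(st, at))^2. A tabular sketch of one such update step, standing in for the paper's neural-network version (the dictionary-based Q table and the transition tuple are illustrative assumptions):

```python
from collections import defaultdict

def sarsa_update(Q, transition, alpha=0.1):
    """One on-policy Sarsa prediction step, reducing the squared
    TD error (g + Q(s_next, a_next) - Q(s, a))^2.

    Tabular stand-in for the paper's neural network: Q is a dict
    keyed by (state, action); `transition` is
    (s, a, g, s_next, a_next, terminal). Episodes end at a goal,
    so terminal transitions use the goal reward g alone as target.
    """
    s, a, g, s_next, a_next, terminal = transition
    target = g if terminal else g + Q[(s_next, a_next)]
    td_error = target - Q[(s, a)]
    Q[(s, a)] += alpha * td_error      # gradient step on squared error
    return td_error
```

Repeating this update over the observed play-by-play transitions drives Q(s, a) toward the probability that the team scores the next goal.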
{
"chunk_id": "045ba535-fcec-4313-953b-54fb58203d20",
"text": "6 Player Evaluation\nIn this section, we define our novel Goal Impact Metric and give an example player ranking.\n6.1 Player Evaluation Metric\nOur Q-function concept provides a novel AI-based definition for assigning a value to an action. Like [Schulte et al., 2017b], we measure the quality of an action by how much it changes the expected return of a player's team. Whereas the scoring chance at a time measures the value of a state, and therefore depends on the previous efforts of the entire team, the change in value measures directly the impact of an action by a specific player. In terms of the Q-function, this is the change in Q-value due to a player's action. This quantity is defined as the action's impact. The impact can be visualized as the difference between successive points in the Q-value ticker (Figure 4). For our specific choice of Next Goal as the reward function, we refer to goal impact. The total impact of a player's actions is his Goal Impact Metric (GIM).",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 9,
"total_chunks": 23,
"char_count": 981,
"word_count": 171,
"chunking_strategy": "semantic"
},
{
"chunk_id": "37508eb3-2d3d-48c6-b6b6-ead4b619d6da",
"text": "The formal equations are:\nimpactteam(st, at) = Qteam(st, at) - Qteam(st-1, at-1)\nGIMi(D) = Ξ£s,a niD(s, a) Γ— impactteami(s, a)\nwhere D indicates our dataset, teami denotes the team of player i, and niD(s, a) is the number of times that player i was observed to perform action a at s. Because it is the sum of differences between subsequent Q-values, the GIM metric inherits context-sensitivity from the Q function.\nFigure 4: Temporal Projection of the method. For each team, and each game time, the graph shows the chance that the team scores the next goal, as estimated by the model. Major events lead to major changes in scoring chances, as annotated. The network also captures smaller changes associated with every action under different game contexts.\nThe Sarsa squared-error loss and gradient update (Section 5.2) are:\nLt(ΞΈt) = E[(gt + Q(st+1, at+1, ΞΈt) - Q(st, at, ΞΈt))^2]\nΞΈt+1 = ΞΈt + Ξ±βˆ‡ΞΈLt(ΞΈt)\nwhere gt and Q are for a single team. LSTM training requires setting a trace length parameter tlt. This key parameter controls how far back in time the LSTM propagates the error signal from the current time at the input history. Continuous-flow sports like ice hockey show a turn-taking aspect where one team is on the offensive and the other defends; one such turn is called a play. We set tlt to the number of time steps from the current time t to the beginning of the current play (with a maximum of 10 steps). A play ends when the possession of the puck changes from one team to another.\n6.2 Rank Players with GIM\nTable 4 lists the top-20 highest-impact players, with basic statistics. All these players are well-known NHL stars. Taylor Hall tops the ranking although he did not score the most goals. This shows how our ranking, while correlated with goals, also reflects the value of other actions by the player. For instance, we find that the total number of passes performed by Taylor Hall is exceptionally high at 320. Our metric can be used to identify undervalued players. For instance, Johnny Gaudreau and Mark Scheifele drew salaries below",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 10,
"total_chunks": 23,
"char_count": 1957,
"word_count": 344,
"chunking_strategy": "semantic"
},
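The chunk above defines an action's impact as the change in Q-value it causes, and GIM as the sum of a player's impacts. A minimal single-episode sketch of that aggregation (the event list and Q-value lookup are hypothetical structures for illustration, not the paper's code):

```python
def goal_impact_metric(events, q_values):
    """Aggregate each player's Goal Impact Metric (GIM).

    impact(s_t, a_t) = Q_team(s_t, a_t) - Q_team(s_{t-1}, a_{t-1});
    GIM_i = sum of impacts over all actions taken by player i.

    `events` is a chronological list of (player_id, event_key) pairs for
    a single episode, and `q_values` maps each event_key to the acting
    team's estimated Q-value after that action. The previous Q is taken
    as 0.0 at the episode start (a simplification).
    """
    gim = {}
    prev_q = 0.0
    for player, key in events:
        q = q_values[key]
        impact = q - prev_q            # change in Q due to this action
        gim[player] = gim.get(player, 0.0) + impact
        prev_q = q
    return gim
```

For example, if a player's actions raise the team's next-goal probability from 0.0 to 0.3 and later from 0.5 to 0.6, that player accumulates a GIM of 0.4.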
{
"chunk_id": "5c78768c-420a-475f-9160-7afbf15856d6",
"text": "Later they received a $5M+ contract for the 2016-17 season. Using possession changes as break points for temporal models is common in several continuous-flow sports, especially basketball [Cervone et al., 2014; Omidiran, 2011]. We apply Tensorflow to implement training; our source code is published on-line.[2]\n7 Empirical Evaluation\nWe describe our comparison methods and evaluation methodology. Similar to clustering problems, there is no ground truth for the task of player evaluation. To assess a player evaluation metric, we follow previous work [Routley and Schulte, 2015; Pettigrew, 2015] and compute its correlation with statistics that directly measure success.\nIllustration of Temporal Projection. Figure 4 shows a value ticker [Decroos et al., 2017; Cervone et al., 2014] that represents the evolution of the Q function over the 3rd period of a match between the Blue Jackets (Home team) and the Penguins (Away team), Nov. 17, 2015. The figure plots the values of the three output nodes.",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 12,
"total_chunks": 23,
"char_count": 953,
"word_count": 149,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3999964d-a870-47ed-b168-efb58261e58b",
"text": "We highlight critical events and match contexts to show the context-sensitivity of the Q function. High scoring probabilities for one team decrease those of its opponent. The probability that neither team scores rises significantly at the end of the match.\nWe compute each metric's correlation with statistics that directly measure success, like Goals, Assists, Points and Play Time (Section 7.2). There are two justifications for comparing with success measures. (1) These statistics are generally recognized as important measures of a player's strength, because they indicate the player's ability to contribute to game-changing events. So a comprehensive performance metric ought to be related to them. (2) The success measures are often forecasting targets for hockey stakeholders, so a good player evaluation metric should have predictive value for them. For example, teams would want to know how many points an offensive player will contribute.\n[2] https://github.com/Guiliang/DRL-ice-hockey\nAfter the model has been trained off-line, the GIM metric can be computed quickly with a single pass over the data.\nSignificance Test. To assess whether GIM is significantly different from the other player evaluation metrics, we perform paired t-tests over all players. The null hypothesis is rejected with respective p-values of 1.1Γ—10^-186, 7.6Γ—10^-204, 8Γ—10^-218, 3.9Γ—10^-181, 4.7Γ—10^-201 and 1.3Γ—10^-05 for PlusMinus, GAR, WAR, EG, SI and GIM-T1, which shows that GIM values are very different from the other metrics' values.\nTable 4: 2015-2016 Top-20 Player Impact Scores (Name, GIM, Assists, Goals, Points, Team, Salary)\nTaylor Hall 96.40 39 26 65 EDM $6,000,000\nJoe Pavelski 94.56 40 38 78 SJS $6,000,000\nJohnny Gaudreau 94.51 48 30 78 CGY $925,000\nAnze Kopitar 94.10 49 25 74 LAK $7,700,000\nErik Karlsson 92.41 66 16 82 OTT $7,000,000\nPatrice Bergeron 92.06 36 32 68 BOS $8,750,000\nMark Scheifele 90.67 32 29 61 WPG $832,500\nSidney Crosby 90.21 49 36 85 PIT $12,000,000\nClaude Giroux 89.64 45 22 67 PHI $9,000,000\nDustin Byfuglien 89.46 34 19 53 WPG $6,000,000\nJamie Benn 88.38 48 41 89 DAL $5,750,000\nPatrick Kane 87.81 60 46 106 CHI $13,800,000\nMark Stone 86.42 38 23 61 OTT $2,250,000\nBlake Wheeler 85.83 52 26 78 WPG $5,800,000\nTyler Toffoli 83.25 27 31 58 DAL $2,600,000\nCharlie Coyle 81.50 21 21 42 MIN $1,900,000\nTyson Barrie 81.46 36 13 49 COL $3,200,000\nJonathan Toews 80.92 30 28 58 CHI $13,800,000\nSean Monahan 80.92 36 27 63 CGY $925,000\nVladimir Tarasenko 80.68 34 40 74 STL $8,000,000\n7.2 Season Totals: Correlations with Standard Success Measures\nIn the following experiment, we compute the correlation between player ranking metrics and success measures over the entire season. Table 5 shows the correlation coefficients of the comparison methods with 14 standard success measures: Assist, Goal, Game Winning Goal (GWG), Overtime Goal (OTG), Short-handed Goal (SHG), Power-play Goal (PPG), Shots (S), Point, Short-handed Point (SHP), Power-play Point (PPP), Face-off Win Percentage (FOW), Points Per Game (P/GP), Time On Ice (TOI) and Penalty Minute (PIM). These are all commonly used measures available from the NHL official website (www.nhl.com/stats/player).",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 13,
"total_chunks": 23,
"char_count": 3108,
"word_count": 490,
"chunking_strategy": "semantic"
},
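The evaluation above rests on correlating a metric's per-player values with season success measures. A minimal stdlib sketch of the Pearson correlation coefficient used for such comparisons (the paper does not publish this helper; it is standard statistics shown for illustration):

```python
import math

def pearson_corr(xs, ys):
    """Pearson correlation coefficient between two equal-length lists,
    e.g. players' GIM values vs. their season Points totals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A perfectly linear relationship yields 1.0 (or -1.0 when inverted); in practice one would use scipy.stats.pearsonr, and scipy.stats.ttest_rel for the paired t-tests reported above.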
{
"chunk_id": "7cd5e0a7-1f48-4899-a03c-fcdd0556e6c0",
"text": "To evaluate the ability of the GIM metric for generalizing from past performance to future success, we report two measurements: how well the GIM metric predicts a total season success measure from a sample of matches only (Section 7.3), and how well the GIM metric predicts the future salary of a player in subsequent seasons (Section 7.4). Mapping performance to salaries is a practically important task because it provides an objective standard to guide players and teams in salary negotiations [Idson and Kahane, 2000]. GIM achieves the highest correlation in 12 out of 14 success measures. For the remaining two (TOI and PIM), GIM is comparable to the highest. Together, the Q-based metrics GIM, GIM-T1 and SI show the highest correlations with success measures. EG is only the fourth-best metric, because it considers only the expected value of shots without look-ahead. The traditional sports analytics metrics correlate poorly with almost all success measures.",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 14,
"total_chunks": 23,
"char_count": 933,
"word_count": 150,
"chunking_strategy": "semantic"
},
{
"chunk_id": "03d5aac5-a499-45c3-b75b-20528d8e3d49",
"text": "This is evidence that AI techniques that provide fine-grained expected action value estimates lead to better performance metrics. With the neural network model, GIM can handle continuous input without pre-discretization. This prevents the loss of game context information and explains why both GIM and GIM-T1 perform better than SI in most success measures. And the higher correlation of GIM compared to GIM-T1 also demonstrates the value of game history. In terms of absolute correlations, GIM achieves high values, except for the very rare events OTG, SHG, SHP and FOW. Another exception is Penalty Minutes (PIM), which, interestingly, shows positive correlation with all player evaluation metrics, although penalties are undesirable. We hypothesize that better players are more likely to receive penalties, because they play more often and more aggressively.\n7.1 Comparison Player Evaluation Metrics\nWe compare GIM with the following player evaluation metrics to show the advantage of 1) modeling game context, 2) incorporating continuous context signals, and 3) including history. Our first baseline method, Plus-Minus (+/-), is a commonly used metric that measures how the presence of a player influences the goals of his team [Macdonald, 2011]. The second baseline method, Goal-Above-Replacement (GAR), estimates the difference in a team's scoring chances when the target player plays, vs. replacing him or her with an average player [Gerstenberg et al., 2014]. Win-Above-Replacement (WAR), our third baseline method, is the same as GAR but for winning chances [Gerstenberg et al., 2014]. Our fourth baseline method, Expected Goal (EG), weights each shot by the chance of it leading to a goal. These four methods consider only very limited game context.\n7.3 Round-by-Round Correlations: Predicting Future Performance From Past Performance",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 15,
"total_chunks": 23,
"char_count": 1862,
"word_count": 282,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e749b6e8-db7a-4be4-9267-17a1a3a0c90b",
"text": "The last baseline method, Scoring Impact (SI), is the most similar method to GIM, as it is based on Q-values. But its Q-values are learned with pre-discretized spatial regions and game time [Schulte et al., 2017a]. As a lesion method, we include GIM-T1, where we set the maximum trace length of the LSTM to 1 (instead of 10) in computing GIM. This comparison assesses the importance of including enough history information.\nComputing Cost. Compared to traditional metrics like +/-, learning a Q-function is computationally demanding (over 5 million gradient descent steps on our dataset).\nA sports season is commonly divided into rounds. In round n, a team or player has finished n games in a season. For a given performance metric, we measure the correlation between (i) its value computed over the first n rounds, and (ii) the value of the three main success measures, assists, goals, and points, computed over the entire season. This allows us to assess how quickly different metrics acquire predictive power for the final season total, so that future performance can be predicted from past performance. We also evaluate the auto-correlation of a metric's round-by-round total with its own season total.",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 16,
"total_chunks": 23,
"char_count": 1166,
"word_count": 188,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f166710b-65ae-46a7-86fb-db05ab7f67b1",
"text": "However, after\nits own season total. The auto-correlation is a measure of temporal consistency, which is a desirable feature [Pettigrew, 2015], because generally the skill of a player does not change greatly throughout a season. Therefore a good performance metric should show temporal consistency.\n\nmethods: Assist, Goal, GWG, OTG, SHG, PPG, S\n+/- 0.236 0.204 0.217 0.16 0.095 0.099 0.118\nGAR 0.527 0.633 0.552 0.324 0.191 0.583 0.549\nWAR 0.516 0.652 0.551 0.332 0.192 0.564 0.532\nEG 0.783 0.834 0.704 0.448 0.249 0.684 0.891\nSI 0.869 0.745 0.631 0.411 0.27 0.591 0.898\nGIM-T1 0.873 0.752 0.682 0.428 0.291 0.607 0.877\nGIM 0.875 0.878 0.751 0.465 0.345 0.71 0.912\nmethods: Point, SHP, PPP, FOW, P/GP, TOI, PIM\n+/- 0.237 0.159 0.089 -0.045 0.238 0.141 0.049\nGAR 0.622 0.226 0.532 0.16 0.616 0.323 0.089\nWAR 0.612 0.235 0.531 0.153 0.605 0.331 0.078\nEG 0.854 0.287 0.729 0.28 0.702 0.722 0.354\nSI 0.869 0.37 0.707 0.185 0.655 0.955 0.492\nGIM-T1 0.902 0.384 0.736 0.288 0.738 0.777 0.347\nGIM 0.93 0.399 0.774 0.295 0.749 0.835 0.405\nTable 5: Correlation with standard success measures.\n\nWe focused on the expected value metrics EG, SI, GIM-T1 and GIM, which had the highest correlations with success in Table 5. Figure 5 shows the metrics' round-by-round correlation coefficients with assists, goals, and points. The bottom right shows the auto-correlation of a metric's round-by-round total with its own season total. GIM is the most stable metric as measured by auto-correlation: after half the season, the correlation between the round-by-round GIM and the final GIM is already above 0.9.\nFigure 5: Correlations between round-by-round metrics and season totals.\n\nmethods: 2016 to 2017 Season, 2017 to 2018 Season\nPlus Minus 0.177 0.225\nGAR 0.328 0.372\nWAR 0.328 0.372\nEG 0.587 0.6\nSI 0.609 0.668\nGIM-T1 0.596 0.69\nGIM 0.666 0.763\nTable 6: Correlation with Players' Contract.\n\nWe find both GIM and GIM-T1 eventually dominate the predictive value of the other metrics, which shows the advantages of modeling sports game context without pre-discretization. And possession-based GIM also dominates GIM-T1 after the first season half, which shows the value of including play history in the game context.\nplayers in the right bottom part, with high GIM but low salary",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 17,
"total_chunks": 23,
"char_count": 2228,
"word_count": 362,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f05f529b-dbf8-4aab-81bc-2e13ef70d145",
"text": "But how quickly and how much the GIM metrics improve depends on the specific success measure. For instance, in Figure 5, GIM's round-by-round correlation with Goal (top right graph) dominates by round 10, while others require a longer time.\n\n7.4 Future Seasons: Predicting Players' Salary\nIn professional sports, a team will give a comprehensive evaluation to players before deciding their contract. The more value players provide, the larger the contract they will get. Accordingly, a good performance metric should be positively related to the amount of players' future contracts. The NHL regulates when players can renegotiate their contracts, so we focus on players receiving a new contract following the games in our dataset (2015-2016 season). Table 6 shows the metrics' correlations with the amount of players' contracts over all the players who obtained a new contract during the 2016-17 and 2017-18 NHL seasons. Our GIM score achieves the highest correlation in both seasons. This means that the metric can serve as an objective basis for contract negotiations. The scatter plots of Figure 6 illustrate GIM's correlation with the amount of players' future contracts. In the 2016-17 season (left), we find many underestimated\nin their new contract. It is interesting that the percentage of players who are undervalued in their new contract decreases in the next season (from 32/258 in the 2016-17 season to 8/125 in the 2017-2018 season). This suggests that GIM provides an early signal of a player's value after one season, while it often takes teams an additional season to recognize performance enough to award a higher salary.\n\n8 Conclusion and Future Work\nWe investigated Deep Reinforcement Learning (DRL) for professional sports analytics. We applied DRL to learn complex spatio-temporal NHL dynamics. The trained neural network provides a rich source of knowledge about how a team's chance of scoring the next goal depends on the match context. Based on the learned action values, we developed an innovative context-aware performance metric, GIM, that provides a comprehensive evaluation of NHL players, taking into account all of their actions. In our experiments, GIM had the highest correlation with most standard success measures, was the most temporally consistent metric, and generalized best to players' future salary. Our approach applies to similar continuous-flow sports games with rich game contexts,",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 18,
"total_chunks": 23,
"char_count": 2411,
"word_count": 376,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e34447a4-8270-41c5-85be-a535a1afe1e3",
"text": "like soccer and basketball. A limitation of our approach is that players get credit only for recorded individual actions. An influential approach to extend credit to all players on the rink has been based on regression [Macdonald, 2011; Thomas et al., 2013]. A promising direction for future work is to combine Q-values with regression.\nFigure 6: Player GIM vs. Value of new contracts in the 2016-17 (left) and 2017-18 (right) NHL season.\n\n[Idson and Kahane, 2000] Todd L Idson and Leo H Kahane. Team effects on compensation: an application to salary determination in the National Hockey League. Economic Inquiry, 38(2):345–357, 2000.\n[Kaplan et al., 2014] Edward H Kaplan, Kevin Mongeon, and John T Ryan. A Markov model for hockey: Manpower differential and win probability added. INFOR: Information Systems and Operational Research, 52(2):39–50, 2014.\n[Littman, 1994] Michael L Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings International Conference on Machine Learning, volume 157, pages 157–163, 1994.\n[Macdonald, 2011] Brian Macdonald. A regression-based adjusted plus-minus statistic for NHL players. Journal of Quantitative Analysis in Sports, 7(3):29, 2011.\n[McHale et al., 2012] Ian G McHale, Philip A Scarf, and David E Folker. On the development of a soccer player performance rating system for the English Premier League. Interfaces, 42(4):339–351, 2012.\n[Mnih et al., 2015] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, et al.",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 19,
"total_chunks": 23,
"char_count": 1477,
"word_count": 221,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e831b34b-0304-449d-98b4-15e05ed332d6",
"text": "Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.\n\nAcknowledgements",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 20,
"total_chunks": 23,
"char_count": 106,
"word_count": 10,
"chunking_strategy": "semantic"
},
{
"chunk_id": "7ff47275-9c92-48ff-9753-47ac4d833919",
"text": "This work was supported by an Engage Grant from the Natural Sciences and Engineering Research Council of Canada, and a GPU donation from NVIDIA Corporation.\n\n[Omidiran, 2011] Dapo Omidiran. A new look at adjusted plus/minus for basketball analysis. In MIT Sloan Sports",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 21,
"total_chunks": 23,
"char_count": 261,
"word_count": 42,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c5547fbd-46e6-4298-a276-32d35ff3e1a4",
"text": "Analytics Conference [online], 2011.\n[Pettigrew, 2015] Stephen Pettigrew. Assessing the offensive productivity of NHL players using in-game win probabilities. In MIT Sloan Sports Analytics Conference, 2015.\n\nReferences\n[Albert et al., 2017] Jim Albert, Mark E Glickman, Tim B",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 22,
"total_chunks": 23,
"char_count": 276,
"word_count": 39,
"chunking_strategy": "semantic"
},
{
"chunk_id": "bde947c2-acb0-4818-a72b-b770eb6553b1",
"text": "Swartz, and Ruud H Koning. Handbook of Statistical Methods and Analyses in Sports.\n[Buttrey et al., 2011] Samuel Buttrey, Alan Washburn, and Wilson Price. Estimating NHL scoring rates. Journal of Quantitative Analysis in Sports, 7(3), 2011.\n[Cervone et al., 2014] Dan Cervone, Alexander D'Amour, Luke Bornn, and Kirk Goldsberry. Pointwise: Predicting points and valuing decisions in real time with NBA optical tracking data. In MIT Sloan Sports Analytics Conference, 2014.\n[Decroos et al., 2017] Tom Decroos, Vladimir Dzyuba, Jan Van Haaren, and Jesse Davis.\n[Routley and Schulte, 2015] Kurt Routley and Oliver Schulte. A Markov game model for valuing player actions in ice hockey. In Proceedings Uncertainty in Artificial Intelligence (UAI), pages 782–791, 2015.\n[Schuckers and Curro, 2013] Michael Schuckers and James Curro. Total hockey rating (THoR): A comprehensive statistical rating of National Hockey League forwards and defensemen based upon all on-ice events. In MIT Sloan Sports Analytics Conference, 2013.\n[Schulte et al., 2017a] Oliver Schulte, Mahmoud Khademi, Sajjad Gholami, et al. A Markov game model for valuing actions, locations, and team performance in ice hockey.",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 23,
"total_chunks": 23,
"char_count": 1176,
"word_count": 175,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b1f25c7a-7319-493b-9928-c6e6a1d90aeb",
"text": "Predicting soccer highlights from spatio-temporal match event streams. In AAAI 2017, pages 1302–1308, 2017.\nData Mining and Knowledge Discovery, pages 1–23, 2017.\n[Decroos et al., 2018] Tom Decroos, Lotte Bransen, Jan Van Haaren, and Jesse Davis. Actions speak louder than goals: Valuing player actions in soccer. arXiv preprint, 2018.\n[Gerstenberg et al., 2014] Tobias Gerstenberg, Tomer Ullman, Max Kleiman-Weiner, David Lagnado, and Josh Tenenbaum. Wins above replacement: Responsibility attributions as counterfactual replacements. In Proceedings of the Cognitive Science Society, volume 36, 2014.\n[Hausknecht and Stone, 2015] Matthew Hausknecht and Peter Stone. Deep recurrent Q-learning for partially observable MDPs. CoRR, abs/1507.06527, 2015.\n[Schulte et al., 2017b] Oliver Schulte, Zeyu Zhao, Mehrsan Javan, and Philippe Desaulniers. Apples-to-apples: Clustering and ranking NHL players using location information. In MIT Sloan Sports Analytics Conference, 2017.\n[Sutton and Barto, 1998] Richard S Sutton and Andrew G Barto. Introduction to reinforcement learning, volume 135. MIT Press Cambridge, 1998.\n[Thomas et al., 2013] A.C. Thomas et al. Competing process hazard function models for player ratings in ice hockey. The Annals of Applied Statistics, 7(3):1497–1524, 2013.",
"paper_id": "1805.11088",
"title": "Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation",
"authors": [
"Guiliang Liu",
"Oliver Schulte"
],
"published_date": "2018-05-26",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.11088v3",
"chunk_index": 24,
"total_chunks": 23,
"char_count": 1234,
"word_count": 169,
"chunking_strategy": "semantic"
}
]