| [ |
| { |
| "chunk_id": "32a6c22c-372d-406f-b562-76f773651eb5", |
| "text": "Janardhan Rao Doppa DOPPA@EECS.OREGONSTATE.EDU\nAlan Fern AFERN@EECS.OREGONSTATE.EDU\nPrasad Tadepalli TADEPALL@EECS.OREGONSTATE.EDU\nSchool of Electrical Engineering and Computer Science, Oregon State University, Corvallis, OR 97331, USA\nAbstract\nWe consider a framework for structured prediction based on search in the space of complete structured outputs. Given a structured input, an output is produced by running a time-bounded search procedure guided by a learned cost function, and then returning the least cost output uncovered during the search. This framework can be instantiated for a wide range of search spaces and search procedures, and easily incorporates arbitrary structured-prediction loss functions.\nGiven a structured input, a state-based search strategy (e.g. best-first or greedy search), guided by a learned cost function, is used to explore the space of outputs for a specified time bound. The least cost output uncovered by the search is then returned as the prediction.\nThe effectiveness of our approach depends critically on: 1) the identification of an effective combination of search space and search strategy over structured outputs, and 2) our ability to learn a cost function for effectively guiding the search for high quality outputs. The main contribution", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 1, |
| "total_chunks": 29, |
| "char_count": 1280, |
| "word_count": 187, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "454ad368-4d6b-46b0-8cdb-e5b798d61ddd", |
| "text": "In this paper, we make two main technical contributions. First, we define the limited-discrepancy search space over structured outputs, which is able to leverage powerful classification learning algorithms to improve the search space quality. Second, we give a generic cost function learning approach, where the key idea is to learn a cost function that attempts to mimic the behavior of conducting searches guided by the true loss function. Our experiments on six benchmark domains demonstrate that using our framework with only a small amount of search is sufficient for significantly improving on state-of-the-art structured-prediction performance.\nThe main contribution of our work is to provide generic solutions to these two issues. First, we describe the limited-discrepancy search space as a generic search space over complete outputs that can be customized to a particular problem by leveraging the power of (non-structured) classification learning algorithms. Second, we give a generic cost function learning algorithm that can be instantiated for a wide class of \"ranking-based search strategies.\" The key idea is to learn a cost function that allows for imitating the search behavior of the algorithm when guided by the true loss function. We also provide experimental results for our approach on a number of benchmark problems and show that even when using a relatively small amount of search, the performance is comparable or better than the state-of-the-art.\n1. Introduction\nStructured prediction involves learning a predictor that can produce complex structured outputs given complex structured inputs. As an example, consider the problem of image scene labeling, where the structured input is an image and the structured output is a semantic labeling of the image regions. We study a new search-based approach to structured prediction. The approach involves first defining a combi-\n2. Comparison to Related Work\nA typical approach to structured prediction is to learn a cost function C(x, y) for scoring a potential structured output y given a structured input x. Given such a cost function and a new input x, the output computation involves solving the so-called \"Argmin\" problem, which is to find the minimum cost output for a given input. For example, the cost function is often represented as a linear model over template features of both x and y (Lafferty et al., 2001; Taskar et al., 2003; Tsochantaridis et al., 2004).", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 2, |
| "total_chunks": 29, |
| "char_count": 2424, |
| "word_count": 377, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "997ebb90-07c3-467f-9cb1-c5ec4a017d50", |
| "text": "natorial search space over complete structured outputs that allows for traversal of the output space. Next, given a struc-\nAppearing in Proceedings of the 29th International Conference on Machine Learning, Edinburgh, Scotland, UK, 2012. Copyright 2012 by the author(s)/owner(s).\nUnfortunately, exactly solving the Argmin problem is often intractable, and efficient solutions exist only in limited cases, such as when the dependency structure among features forms a tree. In such cases, one might simplify the features to allow for tractable inference or use heuristic optimization methods, which can be detrimental to prediction accuracy. In contrast, a potential advantage of our search-based approach is that it is relatively insensitive to the dependency structure of the features used to define the cost function. That is, the search procedure only needs to be able to evaluate the cost function at specific input-output pairs. Thus, we are free to increase\nThe most closely related framework to ours is the SampleRank framework (Wick et al., 2011), which learns a cost function for guiding a type of Monte-Carlo search in the space of complete outputs. While it shares with our work the idea of explicit search in the output space, there are some significant differences.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 3, |
| "total_chunks": 29, |
| "char_count": 1326, |
| "word_count": 207, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "2da68c56-9ab0-4f03-873e-4846c8b2643b", |
| "text": "the complexity of the cost function without considering its impact on the inference complexity. Another potential benefit of our approach is that since the search is over complete outputs, our inference is inherently an anytime procedure, meaning that it can be stopped at any time and return the best output discovered so far.\nOne approach to addressing inference complexity is cascade training (Felzenszwalb & McAllester, 2007; Weiss & Taskar, 2010; Weiss et al., 2010), where efficient inference is achieved by performing multiple runs of inference from a coarse to fine level of abstraction. Such approaches have shown good success; however, they place\nThe SampleRank framework is focused on Monte-Carlo search, while our approach can be instantiated for a wide range of search algorithms. This is important since it is well understood in the search literature that the most appropriate type of search changes from problem to problem. In addition, the SampleRank framework is highly dependent on a hand-designed \"proposal distribution\" for guiding the search or effectively defining the search space. Rather, we describe a generic approach for constructing search spaces that is shown to be effective across a variety of domains.\n3.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 4, |
| "total_chunks": 29, |
| "char_count": 1234, |
| "word_count": 189, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4575d163-e298-4c0f-88b5-a7d1ead5500f", |
| "text": "some restrictions on the form of the cost functions to facilitate \"cascading.\" Another potential drawback of cascades and most other approaches is that they either ignore the loss function of a problem (e.g. by assuming Hamming loss) or require that the loss function be decomposable in a way that supports \"loss augmented inference\". Our approach is sensitive to the loss function and makes minimal assumptions about it, requiring only that we have a blackbox that can evaluate it for any potential output.\nAn alternative framework is classifier-based structured prediction. These algorithms avoid directly solving the Argmin problem by assuming that structured outputs can be generated by making a series of discrete decisions. The approach then attempts to learn a recurrent classifier that, given an input x, is iteratively applied in order to generate the series of decisions for producing the target output y. Simple training methods (e.g. (Dietterich et al., 1995)) have shown good success and there are some positive theoretical guarantees (Syed & Schapire, 2010; Ross & Bagnell, 2010). However, recurrent classifiers can be prone to error propagation (Kääriäinen, 2006; Ross & Bagnell, 2010). SEARN (Hal Daumé III et al., 2009), SMILe (Ross & Bagnell, 2010), and DAGGER (Ross et al., 2011) attempt to address this issue using more sophisticated training techniques and have shown state-of-the-art structured-prediction results. However, all these approaches use classifiers to produce structured outputs through a single sequence of greedy decisions. Unfortunately, in many problems, some decisions are difficult to predict by a greedy classifier, but are crucial for good performance. In contrast, our approach leverages recurrent classifiers to define good quality search spaces over com-\nProblem Setup\nA structured prediction problem specifies a space of structured inputs X, a space of structured outputs Y, and a non-negative loss function L : X × Y × Y → ℜ+ such that L(x, y′, y) is the loss associated with labeling a particular input x by output y′ when the true output is y. We are provided with a training set of input-output pairs drawn from an unknown target distribution, and the goal is to return a function/predictor from structured inputs to outputs whose predicted outputs have low expected loss with respect to the distribution. Since our algorithms will be learning cost functions over input-output pairs, we assume the availability of a feature function Φ : X × Y → ℜ^n that computes an n-dimensional feature vector for any pair.\nOutput Space Search. We consider a framework for structured prediction based on state-based search in the space of complete structured outputs. The states of the search space are pairs of inputs and outputs (x, y), representing the possibility of predicting y as the output for x. A search space over those states is specified by two functions: 1) an initial state function I such that I(x) returns an initial search state for any input x, and 2) a successor function S such that for any search state (x, y), S((x, y)) returns a set of successor states {(x, y1), . . . , (x, yk)}, noting that each successor must involve the same input x as the parent.\nGiven a cost function C that returns a numeric cost for any input-output pair (i.e. search state), we compute outputs using a search procedure (e.g. greedy search or beam search) guided by the cost function. In particular, given an input x, the search procedure starts at the initial state I(x) and traverses the space according to some search strategy that is typically sensitive to C.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 5, |
| "total_chunks": 29, |
| "char_count": 3599, |
| "word_count": 578, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "23130837-fa8b-4d60-a1b5-eef74a4c799d", |
| "text": "plete outputs, which allows decision making by comparing multiple complete outputs and choosing the best.\nAfter a specified amount of time, the search halts and the best state (x, y′) according to C that was traversed is returned, with y′ being the predicted output.\nThe effectiveness of our search-based approach depends on the quality of the search space defined by I and S, the search strategy, and the quality of C. In the following sections, we describe our contributions toward defining effective search spaces and learning cost functions.\n4. Search Spaces Over Complete Outputs\nIn this section we describe two search spaces over structured outputs: 1) the Flipbit space, a simple baseline, and 2) the limited-discrepancy search (LDS) space, which is intended to improve on the baseline. We start by describing recurrent classifiers, which are used in the definition of both spaces.\n4.1. Recurrent Classifiers\nA recurrent classifier constructs structured outputs based on a series of discrete decisions. This is formalized for a given structured-prediction problem by defining an appropriate primitive search space.\ncreating a classification training example for each node n on the solution path of a structured example, with feature vector f(n) and label equal to the action followed by the path at n. Our experiments will use recurrent classifiers trained via exact imitation, but more sophisticated methods such as SEARN could also be used.\n4.2. Flipbit Search Space\nThe Flipbit search space is a simple baseline space over complete outputs that uses a given recurrent classifier h for bootstrapping the search. Each search state is represented by a sequence of actions in the primitive space ending in a terminal node representing a complete output. The initial search state corresponds to the actions selected by the classifier, so that I(x) is equal to (x, h(x)), where h(x) is the output generated by the recurrent classifier. The search steps generated by the successor function can change the value of one action at any sequence position of the parent state. In a sequence labeling problem, this corresponds", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 6, |
| "total_chunks": 29, |
| "char_count": 2168, |
| "word_count": 341, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "da030ff0-2ef5-4512-a1a4-527855747254", |
| "text": "It is a 5-tuple ⟨I, A, s, f, T⟩, where I is a function that maps an input x to an initial search node, A is a finite set of actions (or operators), s is the successor function that maps any search node and action to a successor search node, f is a feature function from search nodes to real-valued feature vectors, and T is the terminal state predicate that maps search nodes to {1, 0}, indicating whether the node is a terminal or not. Each terminal node in the search space corresponds to a complete structured output, while non-terminal nodes correspond to partial structured outputs. Thus, the decision process for constructing\nto initializing to the recurrent classifier output and then searching over flips of individual labels. The flip-bit space is often used by local search techniques (without the classifier initialization) and is similar to the \"search space\" underlying Gibbs Sampling.\n4.3. Limited-Discrepancy Search Space (LDS)\nNotice that the Flipbit space only uses the recurrent classifier when initializing the search. The motivation behind our LDS space is to more aggressively exploit the recurrent classifier in order to improve the search space quality.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 7, |
| "total_chunks": 29, |
| "char_count": 1175, |
| "word_count": 191, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "baa6d704-cbcf-40c8-abc3-619214276e61", |
| "text": "an output corresponds to selecting a sequence of actions leading from the initial node to a terminal. A recurrent classifier is a function that maps nodes of the primitive search space to actions, where typically the mapping is in terms of a feature function f(n) that returns a feature vector for any search node. Thus, given a recurrent classifier, we can produce an output for x by starting at the initial node of the primitive space and following its decisions until reaching a terminal.\nAs an example, for sequence labeling problems, the initial state for a given input sequence x is a node containing x with no labeled elements.\nLDS was originally introduced in the context of problem solving using heuristic search (Harvey & Ginsberg, 1995). To put LDS in context, we will describe it in terms of using a classifier for structured prediction given a primitive search space. If the learned classifier is accurate, then the number of incorrect action selections will be relatively small. However, even a small number of errors can propagate and cause poor outputs. The key idea behind LDS is to realize that if the classifier response was corrected at the small number of critical errors, then a much better output would be produced. LDS conducts a (shallow) search in the space", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 8, |
| "total_chunks": 29, |
| "char_count": 1279, |
| "word_count": 211, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "5f8c52f0-7343-4322-b14d-a87f759de707", |
| "text": "The actions correspond to the selection of individual labels, and the successor function adds the selected label in the next position. Terminal nodes correspond to fully labeled sequences, and the feature function computes a feature vector based on the input and previously assigned labels.\nThe most basic approach to learning a recurrent classifier is via exact imitation. For this, we assume that for any training input-output pair (x, y) we can efficiently find an action sequence, or solution path, for producing y from x. The exact imitation training approach learns a classifier by\nof possible corrections in the hope of finding a solution better than the original.\nMore formally, given a classifier h and its selected action sequence of length T, a discrepancy is a pair (i, a) where i ∈ {1, . . . , T} is the index of a decision step and a ∈ A is an action, which generally is different from the choice of the classifier at step i. For any set of discrepancies D, we let h[D] be a new classifier that selects actions identically to h, except that it returns action a at decision step i if (i, a) ∈ D. Thus, the discrepancies in D can be viewed as overriding the preferred choice of h at particular decision steps, possibly correcting for errors, or introducing new errors.\nthe quality of the LDS space. We now relate d(h) to the", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 9, |
| "total_chunks": 29, |
| "char_count": 1379, |
| "word_count": 239, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "5fbb74b9-4ee5-41c5-88da-6f338bdc69ef", |
| "text": "For a structured input x, we will let h[D](x) denote the output returned by h[D] for the search space conditioned on x. At one extreme, when D is empty, h[D](x) simply corresponds to the output produced by the greedy classifier. At the other extreme, when D specifies an action at each step, h[D](x) is not influenced by h at all and is completely specified by the discrepancy set. In practice, when h is reasonably accurate, we will be primarily interested in small discrepancy sets relative to the size of the decision sequence. In particular, if the error rate of the classifier on individual decisions is small, then the number of corrections needed to produce a correct output will be correspondingly small. The problem is that we do not know where the corrections should be made, and thus LDS conducts a search over the discrepancy sets, usually from small to large sets.\nSearch Space Definition. Given a recurrent classifier h, we define the corresponding limited-discrepancy search space over complete outputs as follows. Each search state in the space is represented as (x, D), where x is a structured input and D is a discrepancy set. We view a state (x, D) as equivalent to the input-output state (x, h[D](x)). The initial state function I simply returns (x, ∅), which corre-\nclassifier error rate.\nFor simplicity, assume that all decision sequences for the structured-prediction problem have a fixed length T and consider an input-output pair (x, y), which has a corresponding sequence of actions that generate y. Given a classifier h, we define its exact imitation error on (x, y) to be e/T, where e is the number of mistakes h makes at nodes along the action sequence of (x, y). Further, given a distribution over input-output pairs, we let ε_ei(h) denote the expected exact imitation error with respect to examples drawn from the distribution. Note that the exact imitation training approach aims to learn a classifier that minimizes ε_ei(h). Also, let ε_r(h) denote the expected recurrent error of h, which is the expectation over randomly drawn (x, y) of the Hamming distance between the action sequence produced by h when applied to x and the true action sequence for (x, y). The error ε_r(h) is the actual measure of performance of h when applied to structured prediction. Recall that due to error propagation it is possible that ε_r(h) can be much worse than ε_ei(h), by as much as a factor of T. Proposition 1 shows that d(h) is related to ε_ei(h) rather than the potentially much larger ε_r(h).\nProposition 1.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 10, |
| "total_chunks": 29, |
| "char_count": 2519, |
| "word_count": 423, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "19ee1f63-19bc-4493-8092-492e70550b7d", |
| "text": "sponds to the original output of the recurrent classifier. The successor function S for a state (x, D) returns the set of states of the form (x, D′), where D′ is the same as D but with an additional discrepancy. In this way, a path through the LDS search space starts at the output generated by the recurrent classifier and traverses a sequence of outputs that differ from the original by some number of discrepancies. Given a reasonably accurate h, we expect that high-quality outputs will be generated at relatively shallow depths of this search space and hence will be generated quickly.\n4.4. Search Space Quality\nRecall that in our experiments we train recurrent classifiers via exact imitation, which is an extremely simple approach compared to more elaborate methods such as SEARN.\nFor any classifier h and distribution over structured input-outputs, d(h) = T ε_ei(h).\nProof. For any example (x, y), the depth of y in S_h is equal to the number of imitation errors made by h on (x, y). To see this, simply create a discrepancy set D that contains a discrepancy at the position of each imitation error that corrects the error. This set is at a depth equal to the number of imitation errors, and the classifier h[D] will produce the exact action sequence for producing y. The result follows by noting that the expected number of imitation errors is equal to T ε_ei(h).\nIt is illustrative to compare this result with the Flipbit space. Let d′(h) be the expected target depth in the Flipbit space of a randomly drawn (x, y). It is easy to see that", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 11, |
| "total_chunks": 29, |
| "char_count": 1546, |
| "word_count": 270, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f1d576ec-0f2b-422e-9e56-f650faca784c", |
| "text": "We now show the desirable property that the \"exact imitation accuracy\" optimized by that approach is directly related to the \"quality\" of the LDS search space, where quality relates to the expected amount of search needed to uncover the target output. More formally, given an input-output pair (x, y), we define the LDS target depth for an example (x, y) and classifier h to be the minimum depth of a state in the LDS space corresponding to y. Given a distribution over input-output pairs, we let d(h) denote the expected LDS target depth of a classifier h. Intuitively, the depth of a state in a search space is highly related to the amount of search time required to uncover the node (exponentially related for exhaustive search, and at least linearly related for more greedy search). Thus, we will use d(h) as a measure of\nd′(h) = T ε_r(h), since each search step can only correct a single error and the expected number of errors of the action sequence at the initial node is T ε_r(h). Since in practice and in theory ε_r(h) can be substantially larger than ε_ei(h), this shows that the LDS space will often be superior to the baseline Flipbit space in terms of the expected target depth. Since this depth relates to the difficulty of search and learning, we can then expect the LDS space to be advantageous when ε_r(h) is larger than ε_ei(h). In our experiments, we will see that this is indeed the case.\n5. Cost Function Learning\nIn this section, we describe a generic framework for cost function learning that is applicable for a wide range of search spaces and search strategies. This approach is motivated by our observation that for a variety of structured prediction problems, we can uncover a high quality output if we can guide the output-space search by the loss function with respect to the target output y∗. Since the target output is not available at testing time, we aim to learn a cost function that mimics the search behavior of the loss function on the training data. With an appropriate choice of hypothesis space of cost functions, good performance on the training data translates to good performance on the testing data. We now precisely define the notion of \"guiding the search\" with a loss function.\nthe states of the MDP, and the ranking decision is an action. The following theorem can be proved by adapting the proof of (Fern et al., 2006) with minor changes (e.g., no discounting, and two actions), and applies to stochastic as well as deterministic search procedures.\nTheorem 1. Let H be a finite class of ranking functions. For any target ranking function h ∈ H, and any set of m = (1/ε) ln(|H|/δ) independent runs of a rank-based search procedure P guided by h drawn from a target distribution over inputs, there is a 1 − δ probability that every ĥ ∈ H that is consistent with the runs satisfies L(ĥ) ≤ L(h) + 2ε Lmax,", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 12, |
| "total_chunks": 29, |
| "char_count": 2879, |
| "word_count": 505, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "be45e0ce-627a-4143-a554-9dbe41cff529", |
| "text": "If the loss function can be invoked arbitrarily by the search procedure, then matching its performance would require the cost function to approximate it arbitrarily closely, which is needlessly complex in most cases. Hence, we restrict ourselves to ranking-based search, defined as follows.\nRanking-based Search. Let P be an anytime search procedure that takes an input x ∈ X, calls a cost function C over the pairs X × Y some number of times, and outputs a structured output ybest ∈ Y. We say that P is a ranking-based search procedure if the results of calls to C are only used to compare the relative values for different pairs (x, y) and (x, y′) with a fixed tie breaker. Each such comparison with tie-breaking is called a ranking decision and is characterized by the tuple (x, y, y′, d), where d is a binary decision that indicates y is a better output than y′ for input x. When requested, P returns the best output ybest encountered thus far as evaluated by the cost function.\nNote that the above constraints prohibit the search procedure from being sensitive to the absolute values of the cost function for particular search states (x, y), allowing it to consider only their relative values. Many typical search strategies such as greedy search, best-first search, and beam search satisfy this property.\nA run of a ranking-based search is a sequence x, s1, o1, . . . , sn, on, y, where x is the input to the predictor, y is the output, si is the internal memory state of the predictor just before the ith call to the ranking function, and oi is the ith ranking decision (xi, yi, y′i, di). Given a hypothesis space H of cost functions, the cost function learning works as follows.\nwhere Lmax is the maximum possible loss of any output.\nAlthough the theoretical result assumes that the target cost function h is in the hypothesis space, in practice this is not guaranteed. To minimize the chances of not being able to find a consistent hypothesis, we will only include a smaller set of ranking decisions that are sufficient to preserve the best output of the algorithm at any time step. Since these decisions are specific to every search procedure, we will describe our approach on two specific search algorithms: greedy search and best-first beam search.\nGreedy Search: In greedy search, at each search step i, only the best open (unexpanded) node yi and the best output y∗i uncovered so far, as evaluated by the loss function, are remembered. At each level i, we include decisions that rank yi higher than all its siblings, and y∗i higher than y∗i−1.\nBest-first Beam Search: In best-first beam search, at any search step i, a set of b open nodes Bi and the best output y∗i encountered so far are maintained, where b is the beam width. The best open node yi ∈ Bi is expanded, and Bi+1 is computed to be the best b nodes after expansion. The relevant ranking decisions ensure that all outputs in Bi are ranked higher than those in Ci \\\\ Bi, yi is ranked higher than every output in Bi \\\\ {yi}, and y∗i is ranked higher than y∗i−1.\nTo further reduce the number of constraints considered by the learner, we do the following for both greedy search and beam search. Ranking constraints for exact imitation were generated until reaching y∗, the correct output, and after that we only generate constraint(s) to rank y∗ higher than the best cost open node(s) as evaluated by the current cost function and continue the search guided by the cost function.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 13, |
| "total_chunks": 29, |
| "char_count": 3437, |
| "word_count": 604, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "8921cffe-956f-4263-bc2c-f56a47353d9d", |
| "text": "Summary of Overall Approach\nP on each training example (x, y∗) for a maximum time of\nTmax substituting the loss function L(x, y, y∗) for the cost Our approach consists of two main components, a recurrent\nfunction C(x, y). For each run, it records the set of all classifier and a cost function, and we train them sequenranking decisions (xi, yi, y′i, di). The set of all ranking de- tially.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 15, |
| "total_chunks": 29, |
| "char_count": 389, |
| "word_count": 70, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "3bc0fcae-c2bb-433e-af7e-5ffe13f6f1d2", |
| "text": "First, we train the recurrent classifier as described in\ncisions from all the runs is given as input to a binary classi- Section 4.1. We then use this trained classifier to define one\nfier, which finds a cost function C ∈H, consistent with the of the two search spaces over complete outputs S (either\nset of all such ranking decisions. The ranking-based search Flipbit or LDS) for every training input x (see Section 4).\ncan be viewed as a Markov Decision Process (MDP), where Second, we train the cost function to score outputs for a\nthe internal states of the search procedure correspond to given combination of search space over complete outputs\nS and a search procedure P as described in Section 5. Output Space Search for Structured Prediction At test time, we use the learned recurrent classifier and cost sequence labeling problems with the exception of chunkfunction to make predictions as follows. For each test in- ing and POS tagging, where labels of two previous tokens\nput x, we define the search space over complete outputs S were used. For scene labeling, the labels of neighborhood\nusing the recurrent classifier and execute the search proce- patches were used. In all our experiments, we train the redure P in this search space guided by the cost function for current classifier using exact imitation (see Section 4) via\na specified time bound. We return the best cost output y Perceptron for 100 iterations with learning rate 1. Predicthat is uncovered during the search as the prediction for x. tion accuracy is measured with F1 loss for the chunking\ntask and Hamming loss for all the remaining tasks.\n7. Experiments and Results In all cases, the cost function over input-output pairs is\nsecond order, meaning that it is has features over neigh-Datasets. We evaluate our approach on the following six boring label pairs and triples along with features of thestructured prediction problems (five benchmark sequence structured input. 
We trained the cost function, as describedlabeling problems and a 2D image labeling problem): 1) in Section 5, in an online manner via Perceptron updatesHandwriting Recognition (HW). The input is a sequence with learning rate 0.01 for 500 iterations (i.e., ranking con-of binary-segmented handwritten letters and the output is straints were generated on-the-fly in every iteration).the corresponding character sequence [a−z]+. This dataset\ncontains roughly 6600 examples divided into 10 folds Learners.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 16, |
| "total_chunks": 29, |
| "char_count": 2453, |
| "word_count": 396, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "b782eab8-c96d-4c01-af46-78e1f5da1c45", |
| "text": "We report results for several instantiations of\n(Taskar et al., 2003). We consider two different variants of our framework. First, we consider our framework using\nthis task as in (Hal Daum´e III et al., 2009), in HW-Small a greedy search procedure for both the LDS and flip-bit\nversion, we use one fold for training and remaining 9 folds spaces, denoted by LDS-Greedy and FB-Greedy. In both\nfor testing, and vice-versa in HW-Large. 2) NETtalk training and testing, the greedy search was run for a number\nStress.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 17, |
| "total_chunks": 29, |
| "char_count": 511, |
| "word_count": 86, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "fbf12cae-56ed-417f-80e3-cdca01f655b5", |
| "text": "The task is to assign one of the 5 stress labels to of steps equal to the length of the sequence. Using longer\neach letter of a word. There are 1000 training words and runs did not impact results significantly. Second, we per-\n1000 test words in the standard dataset. We use a sliding formed best-first beam search with a beam width of 100 in\nwindow of size 3 for observational features. 3) NETtalk both the LDS and flib-bit spaces, denoted LDS-BST-b100\nPhoneme. This is similar to NETtalk Stress except that the and FB-BST-b100.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 18, |
| "total_chunks": 29, |
| "char_count": 529, |
| "word_count": 94, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "63c4a3bf-be47-4a5b-82bb-c65c0f3800dc", |
| "text": "The best-first search was run for 200\ntask is to assign one of the 51 phoneme labels to each let- expansions in each case. We tried larger beam widths and\nter of the word. 4) Chunking. The goal in this task is to search steps but performance was similar. Third, to see the\nsyntactically chunk English sentences into meaningful seg- impact of adding additional search at test time to a greedily\nments. We consider the full syntactic chunking task and use trained cost function, we also used the cost function learned\nthe dataset from the CONLL 2000 shared task1, which con- by LDS-Greedy and FB-Greedy in the context of a bestsists of 8936 sentences of training data and 2012 sentences first beam search (beam width = 100) at test time in both\nof testing data. 5) POS tagging. We consider the tagging the LDS and flip-bit space, denoted by LDS-BST(greedy)\nproblem for English language, where the goal is to assign and FB-BST(greedy). We also report the performance of\nthe part-of-speech tag for each word in the sentence. The recurrent classifier (Recurrent) and the exact imitation acstandard data from Wall Street Journal (WSJ) corpus2 was curacy (1 −ǫei), which as described earlier are related to\nused in our experiments. 6) Scene labeling. This dataset the structures of the flip-bit and LDS spaces.\ncontains 700 images of outdoor scenes (Vogel & Schiele, We compare our results with other structured pre-2007). Each image is divided into patches by placing a reg- diction algorithms including CRFs (Lafferty et al.,ular grid of size 10×10 and each patch takes one of the 9 se- 2001), SVM-Struct (Tsochantaridis et al., 2004),mantic labels (sky, water, grass, trunks, foliage, field, rocks, SEARN (Hal Daum´e III et al., 2009) and CASCADESflowers, sand). Simple appearance features like color, tex- (Weiss & Taskar, 2010). For these algorithms, we reportture and position are used to represent each patch. Training the best published results whenever available. 
In thewas performed with 600 images and the remaining 100 im- remaining cases, we used publicly available code or ourages were used for testing. own implementation to generate those results. Ten percent\nFor all sequence labeling problems, the recurrent classi- of the training data was used to tune hyper-parameters.\nfier labels a sequence using a left-to-right ordering and for CRFs were trained using SGD3.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 19, |
| "total_chunks": 29, |
| "char_count": 2374, |
| "word_count": 386, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "90117930-e6c1-427e-852e-69166b7b211b", |
| "text": "SVMhmm was used to\nscene labeling problem with an ordering from top-left to train SVMstruct and the value of parameter C was chosen\nright-bottom in a row-wise raster form. To train the recur- from 10−4, 10−3, · · · , 103, 104 based on the validation\nrent classifier, the output label of previous token is used set. Cascades were trained using the implementation4\nas a feature to predict the label of the current token for all provided by the authors, which can be used for sequence\n1http://www.cnts.ua.ac.be/conll2000/chunking/ 3http://leon.bottou.org/projects/sgd\n2http://www.cis.upenn.edu/ treebank/ 4http://code.google.com/p/structured-cascades/ Output Space Search for Structured Prediction Prediction accuracy results of different structured prediction algorithms. ALGORITHMS DATASETS\nHW-Small HW-Large Stress Phoneme Chunk POS Scene labeling\n1 −ǫei 73.9 83.99 77.97 77.09 88.84 92.5 78.61\nRecurrent 65.67 74.87 72.82 73.58 88.51 92.15 56.64\nLDS-Greedy 83.93 92.94 79.12 80.9 94.73 96.95 74.75\nFB-Greedy 81.83 90.76 78.8 79.79 93.97 96.89 68.93\nLDS-BST(greedy) 84.14 93.23 79.35 81.04 94.74 96.95 76.91\nFB-BST(greedy) 81.83 90.76 78.8 79.83 94.05 96.89 69.25\nLDS-BST-b100 83.28 92.83 79.81 81.57 94.6 96.8 76.63\nFB-BST-b100 81.57 90.13 79.27 80.29 93.84 96.74 69.11\nCRF 80.03 86.89 78.52 78.91 94.77 96.84 -\nSVM-Struct 80.36 87.51 77.99 78.3 93.64 96.81 -\nSEARN 82.12B 90.58B 76.15 77.26 94.44B 95.83 62.31\nCASCADES 69.62 87.95 77.18 69.77 - 96.82 - labeling problems with Hamming loss. For SEARN we Talk datasets. 
Our results with a third order cost function\nreport the best published results with a linear classifier improved in both cases and are better than Cascades for\n(i.e., linear SVMs instead of Perceptron) as indicated by the handwriting task (86.59 for HW-Small and 95.04 for\nB in the table and otherwise ran our own implementation HW-Large).\nof SEARN with optimal approximation as described in Finally, the improvement in the scene labeling domain is(Hal Daum´e III et al., 2009) and optimized the interpola- the most significant, where SEARN achieves an accuracytion parameter β over the validation set. Note that we do of 62.31 versus 74.75 for LDS-Greedy. In this domain,not compare our results to SampleRank due to the fact that most prior work has considered the simpler task of classi-its performance is highly dependent on the hand-designed fying entire images into one of a set of discrete classes, butproposal distribution, which varies from one domain to to the best of our knowledge no one has considered a struc-another. tured prediction approach for patch classification. The only\nComparison to State-of-the-Art.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 20, |
| "total_chunks": 29, |
| "char_count": 2643, |
| "word_count": 400, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "2a6f4ac0-71a1-4631-937f-ed6328e91dcc", |
| "text": "Table 1 shows the pre- reported result for patch classification that we are aware of\ndiction accuracies of the different algorithms ('-' indicates (Vogel & Schiele, 2007) obtain an accuracy of 71.7 (verthat we were not able to generate results for those cases). sus our best performance of 76.91) with non-linear SVMs\nAcross all benchmarks we see that even the most ba- trained i.i.d. on patches using more sophisticated features\nsic instantiations of our framework, LDS-Greedy and FB- than ours. Greedy, produce results that are comparable or significantly Adding More Search. We see that LDS-BST(greedy)better than the state-of-the-art. This is particularly interest- and FB-BST(greedy) are generally the same or better thaning, since these results are achieved using a relatively small LDS-Greedy and FB-Greedy, with the biggest improve-amount of search and the simplest search method and re- ment in the challenging scene labeling domain, improvingsults tend to be the same or better for our other instanti- from 74.75 to 76.91. This shows that it can be an effec-ations.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 21, |
| "total_chunks": 29, |
| "char_count": 1075, |
| "word_count": 167, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4d90969d-30e8-47da-a413-0724104b8cf8", |
| "text": "A likely reason that we are outperforming CRFs tive strategy to train using greedy search and then insertand SVM-Struct is that we use second-order features, while that cost function into a more elaborate search at test timethose approaches use first-order features, since exact infer- for further improvement. We see similar results for LDS-ence with higher order features is too costly, especially dur- BST-b100 and FB-BST-b100 where the cost function wasing training. As stated earlier, one of the advantages of our trained using best-first beam search. There was significantapproach is that we can use higher-order features with neg- improvement for the NET-Talk datasets and scene labelingligible overhead. compared to LDS-BST and FB-BST. This illustrates that\nTo see whether our approach can benefit from further in- the approach can effectively train using the more complex\ncreasing the feature order, we generated results for our ap- search strategy of best-first beam search. It is interesting to\nproach and Cascades using third-order features (not shown note that LDS-BST(greedy) and LDS-BST-b100 perform\nin table) for the NET-Talk and handwriting domains. Both methods use the same best-first search procades improved over the results with second-order cost cedure at test time, but differ in that one trains with greedy\nfunction for the handwriting dataset (81.87 for HW-Small search and the other with best-first search. This shows that\nand 93.76 for HW-Large), but degraded for the NET- based on these results there is not a clear advantage to trainOutput Space Search for Structured Prediction\nAnytime curve for scene labeling task\n80 state-of-the-art performance, validating the effectiveness of\nour framework. 
Future work includes studying robust train-\n75 ing approaches to mitigate error propagation when the cost\n70 function is non-realizable and addressing scalability issues.\naccuracy 65 Acknowledgements\nThis work was supported by NSF grants IIS 0964705, IIS 60 Prediction 0958482, IIS-1018490 and DARPA contract FA8750-09-\n55 C-0179. LDS-Greedy\nFB-Greedy\n0 10 20 30 40 50 60 70 80 90 References Wall clock time (seconds)\nFigure 1.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 22, |
| "total_chunks": 29, |
| "char_count": 2154, |
| "word_count": 327, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "0f3be1fd-298c-471d-b065-6a0e233047f6", |
| "text": "Anytime curves for scene labeling task comparing LDS- Dietterich, Thomas G., Hild, Hermann, and Bakiri, Ghulum. A\nGreedy and FB-Greedy. comparison of ID3 and backpropagation for english text-tospeech mapping. MLJ, 18(1):51–80, 1995.\ning in the context of best-first search, though there are ben- Felzenszwalb, Pedro F. and McAllester, David A. The generalized\nefits at test time.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 23, |
| "total_chunks": 29, |
| "char_count": 379, |
| "word_count": 56, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "60e628f9-f1fb-4ed3-a53d-ab4d32e24b05", |
| "text": "This is a point that deserves further inves- A* architecture. JAIR, 29:153–190, 2007.\ntigation in future work. Fern, Alan, Yoon, Sung Wook, and Givan, Robert.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 24, |
| "total_chunks": 29, |
| "char_count": 158, |
| "word_count": 25, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "43600865-1848-4191-b0d8-0e22338efb69", |
| "text": "Approximate\nLDS space vs. We see that generally the in- policy iteration with a policy language bias: Solving relational\nstances of our method that use the LDS space outperforms Markov decision processes. JAIR, 25:75–118, 2006.\nthe corresponding instances that use the Flipbit space. In- Hal Daum´e III, Langford, John, and Marcu, Daniel.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 25, |
| "total_chunks": 29, |
| "char_count": 338, |
| "word_count": 52, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "60ca4818-b54c-4abb-8e3d-d6cf3b4e177a", |
| "text": "Search-based\nterestingly, if there is a large difference between the exact structured prediction. MLJ, 75(3):297–325, 2009.\nimitation accuracy 1 −ǫei and the recurrent classifier accuHarvey, William D. and Ginsberg, Matthew L. Limited discrep-racy (e.g., Handwriting and Scene labeling), then the LDS ancy search. In IJCAI, 1995.\nspace is significantly better than the flip-bit space. This\nis particularly true in our most complex problem of scene K¨a¨ari¨ainen, M. Lower bounds for reductions. In Atomic Learning\nlabeling where this difference is quite large, as is the gap Workshop, 2006.\nbetween LDS and Flipbit. Lafferty, John, McCallum, Andrew, and Pereira, Fernando.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 26, |
| "total_chunks": 29, |
| "char_count": 672, |
| "word_count": 98, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "6c290e48-ce49-42d0-abb1-9fe19192e1fc", |
| "text": "Conditional random fields: Probabilistic models for segmenting\nFurther, we compared the anytime curves between LDS- and labeling sequence data. Greedy and FB-Greedy, which show the accuracy achieved\nby a method versus an inference time bound at prediction Ross, St´ephane and Bagnell, Drew. Efficient reductions for imitation learning. In AISTATS, 2010.time. Generally we found that LDS-Greedy was comparable or better than the FB-Greedy curve and especially so for Ross, St´ephane, Gordon, Geoffery, and Bagnell, Drew.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 27, |
| "total_chunks": 29, |
| "char_count": 519, |
| "word_count": 74, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "bbb11845-4262-4d94-b478-bc91f4c48b43", |
| "text": "A reducthe Handwriting and Scene Labeling problems. Figure 1 tion of imitation learning and structured prediction to no-regret\nonline learning. In AISTATS, 2011.shows the anytime curves for the Scene Labeling problem. We see that LDS-Greedy is dominant and improves accu- Syed, Umar and Schapire, Rob. A reduction from apprenticeship\nracy much more quickly than FB-Greedy. For example, a learning to classification.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 28, |
| "total_chunks": 29, |
| "char_count": 415, |
| "word_count": 61, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f36e9967-e182-4da2-9d65-9995e7b5f2e8", |
| "text": "In NIPS, 2010.\n10 second time bound for LDS-Greedy achieves the same Taskar, Benjamin, Guestrin, Carlos, and Koller, Daphne. Maxaccuracy as FB-Greedy using 90 seconds. These results margin markov networks. In NIPS, 2003.\nshow the benefit of using the LDS space and empirically\nTsochantaridis, Ioannis, Hofmann, Thomas, Joachims, Thorsten,confirm our observations in Section 4 that the quality of and Altun, Yasemin. Support vector machine learning for inthe LDS and Flipbit spaces are related to the exact imita- terdependent and structured output spaces. In ICML, 2004.\ntion and recurrent errors respectively. Vogel, Julia and Schiele, Bernt. Semantic modeling of natural\nscenes for content-based image retrieval.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 29, |
| "total_chunks": 29, |
| "char_count": 714, |
| "word_count": 104, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "74192392-774c-4133-b6f8-d92f2d81b42e", |
| "text": "IJCV, 72(2):133–157,\n8. Summary and Future Work 2007. We studied a general framework for structured prediction Weiss, David and Taskar, Ben. Structured prediction cascades. In\nAISTATS, 2010.based on search in the space of complete outputs. We\nshowed how powerful classifiers can be leveraged to de- Weiss, David, Sapp, Ben, and Taskar, Ben. Sidestepping infine an effective search space over complete outputs, and tractable inference with structured ensemble cascades. In\ngave a generic cost function learning approach to score NIPS, 2010.\nthe outputs for any given combination of search space and Wick, Michael L., Rohanimanesh, Khashayar, Bellare, Kedar,\nsearch strategy. Our experimental results showed that a Culotta, Aron, and McCallum, Andrew. Samplerank: Trainvery small amount of search is needed to improve upon the ing factor graphs with atomic gradients.", |
| "paper_id": "1206.6460", |
| "title": "Output Space Search for Structured Prediction", |
| "authors": [ |
| "Janardhan Rao Doppa", |
| "Alan Fern", |
| "Prasad Tadepalli" |
| ], |
| "published_date": "2012-06-27", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1206.6460v1", |
| "chunk_index": 30, |
| "total_chunks": 29, |
| "char_count": 865, |
| "word_count": 128, |
| "chunking_strategy": "semantic" |
| } |
| ] |