Robust Algorithms for TSP and Steiner Tree

ARUN GANESH, UC Berkeley, USA
BRUCE M. MAGGS, Duke University and Emerald Innovations, USA
DEBMALYA PANIGRAHI, Duke University, USA

Robust optimization is a widely studied area in operations research, where the algorithm takes as input a range of values and outputs a single solution that performs well for the entire range. Specifically, a robust algorithm aims to minimize regret, defined as the maximum difference between the solution's cost and that of an optimal solution in hindsight once the input has been realized. For graph problems in P, such as shortest path and minimum spanning tree, robust polynomial-time algorithms that obtain a constant approximation on regret are known. In this paper, we study robust algorithms for minimizing regret in NP-hard graph optimization problems, and give constant approximations on regret for the classical traveling salesman and Steiner tree problems.

CCS Concepts: • Theory of computation → Graph algorithms analysis.

Additional Key Words and Phrases: Steiner tree, travelling salesman

Authors' addresses: Arun Ganesh, arunganesh@berkeley.edu, UC Berkeley, Soda Hall, Berkeley, California, USA, 94709; Bruce M. Maggs, bmm@cs.duke.edu, Duke University and Emerald Innovations, 308 Research Drive, Durham, North Carolina, USA, 27710; Debmalya Panigrahi, debmalya@cs.duke.edu, Duke University, 308 Research Drive, Durham, North Carolina, USA, 27710.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
© 2022 Copyright held by the owner/author(s). 1549-6325/2022/12-ART
https://doi.org/10.1145/3570957

1 INTRODUCTION

In many graph optimization problems, the inputs are not known precisely and the algorithm is desired to perform well over a range of inputs. For instance, consider the following situations. Suppose we are planning the delivery route of a vehicle that must deliver goods to n locations. Due to varying traffic conditions, the exact travel times between locations are not known precisely, but a range of possible travel times is available from historical data. Can we design a tour that is nearly optimal for all travel times in the given ranges? Consider another situation where we are designing a telecommunication network to connect a set of locations. We are given cost estimates on connecting every two locations in the network, but these estimates might be off due to unexpected construction problems. Can we design the network in a way that is nearly optimal for all realized construction costs? These questions have led to the field of robust graph algorithms.

To define a robust graph algorithm, we start with a "standard" optimization problem P defined by a set system S ⊆ 2^E over edges E with weights w_e. (We focus on minimization problems over sets of edges in this paper, but one can easily extend the definition to maximization problems and problems over arbitrary set systems.) For example, if P is the minimum spanning tree problem, S would be the set of all sets of edges comprising spanning trees. Given these inputs, the goal of the standard version of P is to find the set F ∈ S that minimizes ∑_{e∈F} w_e. In the robust version of P, given a range of weights [ℓ_e, u_e] for every edge e, we want a solution that is good for all realizations of edge weights simultaneously. To quantify how good a solution is, we define its regret as the maximum difference between the algorithm's cost and the optimal cost over all vectors of edge weights d. In other words, the regret of sol is

max_d (sol(d) − opt(d)), where sol(d) := ∑_{e∈sol} w_e and opt(d) := min_{F∈S} ∑_{e∈F} w_e.

Here, sol(d) (resp. opt(d)) denotes the cost of sol (resp. the cost of the optimal solution) on instance d, and d ranges over all realizable inputs, i.e., inputs such that ℓ_e ≤ w_e ≤ u_e for all e. We emphasize that sol is a fixed solution (independent of d), whereas the solution determining opt(d) depends on the input d. The goal of the robust version of P is to find a solution that minimizes regret. The solution that achieves this minimum is called the minimum regret solution (mrs), and its regret is the minimum regret (mr). In many cases, however, minimizing regret turns out to be NP-hard, in which case one seeks an approximation guarantee. Namely, a β-approximation algorithm satisfies, for all input realizations d, sol(d) − opt(d) ≤ β · mr, i.e., sol(d) ≤ opt(d) + β · mr.

To the best of our knowledge, all previous work on polynomial-time algorithms for minimizing regret in robust graph optimization focused on problems in P. In this paper, we study robust graph algorithms for minimizing regret in NP-hard optimization problems. In particular, we study robust algorithms for the classical traveling salesman (tsp) and Steiner tree (stt) problems, which model, e.g., the two scenarios described at the beginning of the paper. As a consequence of the NP-hardness, we cannot hope to show guarantees of the form sol(d) ≤ opt(d) + β · mr, since for ℓ_e = u_e (i.e., mr = 0) this would imply an exact algorithm for an NP-hard optimization problem. Instead, we give guarantees of the form sol(d) ≤ α · opt(d) + β · mr, where α is (necessarily) at least as large as the best approximation guarantee for the optimization problem. We call such an algorithm an (α, β)-robust algorithm. If both α and β are constants, we call it a constant-approximation to the robust problem.

In this paper, our main results are constant approximation algorithms for the robust traveling salesman and Steiner tree problems. We hope that our work will lead to further research in the field of robust approximation algorithms, particularly for minimizing regret in other NP-hard optimization problems in graph algorithms as well as in other domains.

1.1 Related Work

Robust graph algorithms have been extensively studied in the operations research community. It is known that minimizing regret is NP-hard for the shortest path [25] and minimum cut [1] problems, and using a general theorem for converting exact algorithms to robust ones, 2-approximations are known for these problems [11, 16]. In some cases, better results are known for special classes of graphs, e.g., [17]. Robust minimum spanning tree (mst) has also been studied, although in the context of making exponential-time exact algorithms more practical [24].
Moreover, robust optimization has been extensively researched for other (non-graph) problem domains in the operations research community, and has led to results in clustering [4–6, 19], linear programming [14, 20], and other areas [3, 16]. More details can be found in the book by Kouvelis and Yu [18] and the survey by Aissi et al. [2].

Other robust variants of graph optimization where one does not know the edge costs ahead of time have also been studied in the literature. In the robust combinatorial optimization model proposed by Bertsimas and Sim [7], edge costs are given as ranges as in this paper, but instead of optimizing over all realizations of costs within the ranges, the authors consider a model where at most k edge costs can be set to their maximum value and the remaining are set to their minimum value. The objective is to minimize the maximum cost over all realizations. In this setting, there is no notion of regret, and an approximation algorithm for the standard problem translates to an approximation algorithm for the robust problem with the same approximation factor. In the data-robust model [12], the input includes a polynomial number of explicitly defined "scenarios" for edge costs, with the goal of finding a solution that is approximately optimal for all given scenarios. That is, in the input one receives a graph and a polynomial number of scenarios d^(1), d^(2), ..., d^(k), and the goal is to find alg whose maximum cost across all scenarios is at most some approximation factor times min_sol max_{i∈[k]} ∑_{e∈sol} d_e^(i). In contrast, in this paper we have exponentially many scenarios and look at the maximum of alg(d) − opt(d) rather than alg(d). A variation of this is the recoverable robust model [9], where after seeing the chosen scenario, the algorithm is allowed to "recover" by making a small set of changes to its original solution.

1.2 Problem Definition and Results

We first define the Steiner tree (stt) and traveling salesman (tsp) problems. In both problems, the input is an undirected graph G = (V, E) with non-negative edge costs. In Steiner tree, we are also given a subset of vertices called terminals, and the goal is to obtain a minimum cost connected subgraph of G that spans all the terminals. In traveling salesman, the goal is to obtain a minimum cost tour that visits every vertex in V. In the robust versions of these problems, the edge costs are ranges [ℓ_e, u_e] from which any cost may realize.

Our main results are the following:

Theorem 1.1. (Robust Approximations.) There exist constant approximation algorithms for the robust traveling salesman and Steiner tree problems.

Remark: The constants we are able to obtain for the two problems are very different: (4.5, 3.75) for tsp (in Section 3) and (2755, 64) for stt (in Section 5). While we did not attempt to optimize the precise constants, obtaining small constants for stt comparable to the tsp result requires new ideas beyond our work and is an interesting open problem.

We complement our algorithmic results with lower bounds. Note that if ℓ_e = u_e, we have mr = 0, and thus an (α, β)-robust algorithm gives an α-approximation for precise inputs. So, hardness of approximation results yield corresponding lower bounds on α. More interestingly, we show that hardness of approximation results also yield lower bounds on the value of β (see Section 6 for details):

Theorem 1.2. (APX-hardness.)
A hardness of approximation of ρ for tsp (resp., stt) under P ≠ NP implies that it is NP-hard to obtain α ≤ ρ (irrespective of β) and β ≤ ρ (irrespective of α) for robust tsp (resp., robust stt).

1.3 Our Techniques

We now give a sketch of our techniques. Before doing so, we note that for problems in P with linear objectives, it is known that running an exact algorithm using weights (ℓ_e + u_e)/2 gives a (1, 2)-robust solution [11, 16]. One might hope that a similar result can be obtained for NP-hard problems by replacing the exact algorithm with an approximation algorithm in the above framework. Unfortunately, we show in Section 3 that this is not true in general. In particular, we give a robust tsp instance where using a 2-approximation for tsp with weights (ℓ_e + u_e)/2 gives a solution that is not (α, β)-robust for any α = o(n), β = o(n). More generally, a black-box approximation run on a fixed realization could output a solution including edges that have small weight relative to opt for that realization (so including these edges does not violate the approximation guarantee), but these edges could have large weight relative to mr and opt in other realizations, ruining the robustness guarantee. This establishes a qualitative difference between robust approximations for the problems in P considered earlier and the NP-hard problems considered in this paper, and demonstrates the need to develop new techniques for the latter class of problems.

LP relaxation. We denote the input graph G = (V, E). For each edge e ∈ E, the input is a range [ℓ_e, u_e] in which the actual edge weight w_e can realize to any value. (There are two common and equivalent assumptions made in the tsp literature in order to achieve reasonable approximations: in the first, the algorithm can visit vertices multiple times in the tour, while in the second, the edges satisfy the metric property. We use the former in this paper.) The robust version of a graph optimization problem P then has the LP relaxation

min{r : x ∈ S_P; ∑_{e∈E} x_e w_e ≤ opt(d) + r, ∀d},

where S_P is the standard polytope for P, and opt(d) denotes the cost of an optimal solution when the edge weights are d = {w_e : e ∈ E}. That is, this is the standard LP for the problem, but with the additional constraint that the fractional solution x must have regret at most r for any realization of edge weights. We call the additional constraints the regret constraint set. Note that setting x to be the indicator vector of mrs and r to mr gives a feasible solution to the LP; thus, the LP optimum is at most mr, i.e., the optimal solution to the LP gives a lower bound for the regret minimization problem.

Solving the LP. We assume that the constraints in S_P are separable in polynomial time (e.g., this is true for most standard optimization problems, including stt and tsp). So, designing the separation oracle comes down to separating the regret constraint set, which requires checking that:

max_d [∑_{e∈E} x_e w_e − opt(d)] = max_d max_sol [∑_{e∈E} x_e w_e − sol(d)] = max_sol max_d [∑_{e∈E} x_e w_e − sol(d)] ≤ r.

Thus, given a fractional solution x, we need to find an integer solution sol and a weight vector d that maximize the regret of x, given by ∑_{e∈E} x_e w_e − sol(d). Once sol is fixed, finding the d that maximizes the regret is simple: if sol does not include an edge e, then to maximize ∑_{e∈E} x_e w_e − sol(d), we set w_e = u_e; if sol includes e, we set w_e = ℓ_e, as in the sketch below.
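For concreteness, the following minimal sketch (in Python; the function and argument names are ours, not the paper's) computes this worst-case realization and the resulting regret of x against a fixed sol:

    def worst_case_realization(edges, lo, hi, x, sol):
        """Build the realization d maximizing the regret of a fractional
        solution x against a fixed solution sol: w_e = u_e off sol and
        w_e = l_e on sol. Returns the regret and the realization."""
        d = {e: (lo[e] if e in sol else hi[e]) for e in edges}
        frac_cost = sum(x[e] * d[e] for e in edges)  # cost of x under d
        sol_cost = sum(d[e] for e in sol)            # cost of sol under d
        return frac_cost - sol_cost, d

Maximizing the resulting expression over sol is exactly what Eq. (1) below formalizes via the derived instance.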
Note that in these two cases, edge e contributes u_e x_e and ℓ_e x_e − ℓ_e, respectively, to the regret. The above maximization thus becomes:

max_sol [∑_{e∉sol} u_e x_e + ∑_{e∈sol} (ℓ_e x_e − ℓ_e)] = ∑_{e∈E} u_e x_e − min_sol ∑_{e∈sol} (u_e x_e − ℓ_e x_e + ℓ_e).     (1)

Thus, sol is exactly the optimal solution with edge weights w̄_e := u_e x_e − ℓ_e x_e + ℓ_e. (For reference, we define the derived instance of a robust graph problem as the instance with edge weights w̄_e.) Note that these weights are non-negative, since u_e ≥ ℓ_e ≥ 0 and x_e ≥ 0.

Now, if we were solving a problem in P, we would simply need to solve the problem on the derived instance. Indeed, we will show later that this yields an alternative technique for obtaining robust algorithms for problems in P, and recover existing results in [16]. However, we cannot hope to find an optimal solution to an NP-hard problem. Our first compromise is that we settle for an approximate separation oracle. More precisely, our goal is to show that there exist fixed constants α′, β′ ≥ 1 such that if ∑_{e∈E} x_e w_e > α′ · opt(d) + β′ · r for some d, then we can find sol and d′ such that ∑_{e∈E} x_e d′_e > sol(d′) + r. Since the LP optimum is at most mr, we can then obtain an (α′, β′)-robust fractional solution using the standard ellipsoid algorithm.

For tsp, we show that the above guarantee can be achieved by the classic 2-approximation based on the mst solution of the derived instance. The details appear in Section 3.

Although stt also admits a 2-approximation based on the mst solution, this turns out to be insufficient for the above guarantee. Instead, we use a different approach here. We note that the regret of the fractional solution against any fixed solution sol (i.e., the argument over which Eq. (1) maximizes) can be expressed as the following difference:

∑_{e∉sol} (u_e x_e − ℓ_e x_e + ℓ_e) − ∑_{e∈E} (ℓ_e − ℓ_e x_e) = ∑_{e∉sol} w̄_e − ∑_{e∈E} w′_e, where w′_e := ℓ_e − ℓ_e x_e.

The first term is the weight of the edges of the derived instance that are not in sol. The second term corresponds to a new stt instance with different edge weights w′_e. It turns out that the overall problem now reduces to showing the following approximation guarantees on these two stt instances (c_1 and c_2 are constants):

(i) ∑_{e∈alg\sol} w̄_e ≤ c_1 · ∑_{e∈sol\alg} w̄_e and (ii) ∑_{e∈alg} w′_e ≤ c_2 · ∑_{e∈sol} w′_e.

Note that guarantee (i) on the derived instance is an unusual "difference approximation" that is stronger than the usual approximation guarantees. Moreover, we need these approximation bounds to hold simultaneously, i.e., for the same alg. Obtaining these dual approximation bounds simultaneously forms the most technically challenging part of our work, and is given in Section 5.

Rounding the fractional solution. After applying our approximate separation oracle, we have a fractional solution x such that for all edge weights d, we have ∑_{e∈E} x_e w_e ≤ α′ · opt(d) + β′ · mr. Suppose that, ignoring the regret constraint set, the LP we are using has integrality gap at most ν for precise inputs. Then we can bound the difference between the cost of mrs and νx in every realization: since the integrality gap is at most ν, we have ν · ∑_{e∈E} x_e w_e ≥ opt(d) for any d. This implies that:

mrs(d) − ν · ∑_{e∈E} x_e w_e ≤ mrs(d) − opt(d) ≤ mr.

Hence, the regret of mrs with respect to νx is at most mr. A natural rounding approach is then to try to match this property of mrs, i.e., to search for an integer solution alg that does not cost much more than νx in any realization.
Suppose we choose alg that satisfies:

alg = argmin_sol max_d [sol(d) − ν ∑_{e∈E} x_e w_e].     (2)

Since alg has minimum regret with respect to νx among integral solutions, and mrs has regret at most mr with respect to νx, alg's regret with respect to νx is also at most mr. Note that νx is a (να′, νβ′)-robust solution. Hence, alg is a (να′, νβ′ + 1)-robust solution.

If we are solving a problem in P, the alg that satisfies Eq. (2) is the optimal solution with weights max{ℓ_e, u_e − (u_e − ℓ_e)νx_e}, and thus can be found in polynomial time. So, using an integral LP formulation (i.e., integrality gap of 1), we get a (1, 2)-robust algorithm overall for these problems. This exactly matches the results in [16], although we are using a different set of techniques. (The authors of [16] obtain (1, 2)-robust algorithms by choosing alg as the optimal solution with edge weights (ℓ_e + u_e)/2. For any d, consider d′ with weights w′_e = u_e + ℓ_e − w_e. By the optimality of alg, alg(d) + alg(d′) ≤ mrs(d) + mrs(d′). Rearranging, we get alg(d) − mrs(d) ≤ mrs(d′) − alg(d′) ≤ mr, so alg's cost exceeds mrs's cost by at most mr in every realization.)

Of course, for NP-hard problems, finding a solution alg that satisfies Eq. (2) is NP-hard as well. It turns out, however, that we can design a generic rounding algorithm that gives the following guarantee:

Theorem 1.3. There exists a rounding algorithm that takes as input an (α, β)-robust fractional solution to stt (resp., tsp) and outputs a (λνα, λνβ + λ)-robust integral solution, where λ and ν are respectively the best approximation factor and integrality gap for (classical) stt (resp., tsp).

We remark that while we stated this rounding theorem for stt and tsp here, we actually give a more general version (Theorem 2.1) in Section 2 that applies to a broader class of covering problems including set cover, survivable network design, etc., and might be useful in future research in this domain.

1.4 Roadmap

We present the general rounding algorithm for robust problems in Section 2. In Section 3, we use this rounding algorithm to give a robust algorithm for the Traveling Salesman problem. Section 4 gives a local search algorithm for the Steiner Tree problem. Both the local search algorithm and the rounding algorithm from Section 2 are then used to give a robust algorithm for the Steiner Tree problem in Section 5. The hardness results for robust problems appear in Section 6. Finally, we conclude with some interesting directions of future work in Section 7.

2 A GENERAL ROUNDING ALGORITHM FOR ROBUST PROBLEMS

In this section we give the rounding algorithm of Theorem 1.3, which is a corollary of the following, more general theorem:

Theorem 2.1. Let P be an optimization problem defined on a set system S ⊆ 2^X that seeks to find the set F ∈ S minimizing ∑_{e∈F} w_e, i.e., the sum of the weights of the elements in F. In the robust version of this optimization problem, we have w_e ∈ [ℓ_e, u_e] for all e ∈ X.

Consider an LP formulation of P (called P-LP) given by {min ∑_{e∈X} x_e w_e : x ∈ Q, x ∈ [0, 1]^X}, where Q is a polytope containing the indicator vector χ_F of every F ∈ S and not containing χ_F for any F ∉ S. The corresponding LP formulation for the robust version (called P_robust-LP) is given by {min r : x ∈ Q, x ∈ [0, 1]^X, ∑_{e∈X} x_e w_e ≤ opt(d) + r ∀d}.

Now, suppose we have the following properties:
• There is a λ-approximation algorithm for P.
• The integrality gap of P-LP is at most ν.
• There is a feasible solution x* to P-LP that satisfies ∀d : ∑_{e∈X} x*_e w_e ≤ α · opt(d) + β · mr.
Then, there exists an algorithm that outputs a (λνα, λνβ + λ)-robust sol for P.

Proof. The algorithm is as follows: construct an instance of P which uses the same set system S and where element e has weight max{u_e(1 − νx*_e), ℓ_e(1 − νx*_e)} + νℓ_e x*_e. Then, use the λ-approximation algorithm for P on this instance to find an integral solution F, and output it.

Given a feasible solution F′ to P, note that:

max_d [∑_{e∈F′} w_e − ν ∑_{e∈X} x*_e w_e] = ∑_{e∈F′} max{u_e(1 − νx*_e), ℓ_e(1 − νx*_e)} − ∑_{e∉F′} νℓ_e x*_e
= ∑_{e∈F′} [max{u_e(1 − νx*_e), ℓ_e(1 − νx*_e)} + νℓ_e x*_e] − ∑_{e∈X} νℓ_e x*_e.

Now, since F was output by a λ-approximation algorithm, for any feasible solution F′:

∑_{e∈F} [max{u_e(1 − νx*_e), ℓ_e(1 − νx*_e)} + νℓ_e x*_e] ≤ λ ∑_{e∈F′} [max{u_e(1 − νx*_e), ℓ_e(1 − νx*_e)} + νℓ_e x*_e]

⟹ ∑_{e∈F} [max{u_e(1 − νx*_e), ℓ_e(1 − νx*_e)} + νℓ_e x*_e] − λ ∑_{e∈X} νℓ_e x*_e
≤ λ [∑_{e∈F′} [max{u_e(1 − νx*_e), ℓ_e(1 − νx*_e)} + νℓ_e x*_e] − ∑_{e∈X} νℓ_e x*_e]
= λ max_d [∑_{e∈F′} w_e − ν ∑_{e∈X} x*_e w_e].

Since P-LP has integrality gap at most ν, for any fractional solution x, ∀d : opt(d) ≤ ν ∑_{e∈X} x_e w_e. Fixing F′ to be the set of elements used in the minimum regret solution then gives:

max_d [∑_{e∈F′} w_e − ν ∑_{e∈X} x*_e w_e] ≤ max_d [mrs(d) − opt(d)] = mr.

Combined with the previous inequality, this gives:

∑_{e∈F} [max{u_e(1 − νx*_e), ℓ_e(1 − νx*_e)} + νℓ_e x*_e] − λ ∑_{e∈X} νℓ_e x*_e ≤ λ · mr
⟹ ∑_{e∈F} [max{u_e(1 − νx*_e), ℓ_e(1 − νx*_e)} + νℓ_e x*_e] − ∑_{e∈X} νℓ_e x*_e ≤ λ · mr + (λ − 1) ∑_{e∈X} νℓ_e x*_e
⟹ max_d [∑_{e∈F} w_e − ν ∑_{e∈X} x*_e w_e] ≤ λ · mr + (λ − 1) ∑_{e∈X} νℓ_e x*_e.

This implies:

∀d : sol(d) = ∑_{e∈F} w_e ≤ ν ∑_{e∈X} x*_e w_e + λ · mr + (λ − 1) ∑_{e∈X} νℓ_e x*_e
≤ ν ∑_{e∈X} x*_e w_e + λ · mr + (λ − 1) ν ∑_{e∈X} x*_e w_e   (using ℓ_e ≤ w_e)
= λν ∑_{e∈X} x*_e w_e + λ · mr
≤ λν [α · opt(d) + β · mr] + λ · mr = λνα · opt(d) + (λνβ + λ) · mr,

i.e., sol is (λνα, λνβ + λ)-robust, as desired. □

3 ALGORITHM FOR THE ROBUST TRAVELING SALESMAN PROBLEM

In this section, we give a robust algorithm for the traveling salesman problem:

Theorem 3.1. There exists a (4.5, 3.75)-robust algorithm for the traveling salesman problem.

Recall that we consider the version of tsp in which we are allowed to use edges multiple times. We recall that any tsp tour must contain a spanning tree, and an Eulerian walk on a doubled mst is a 2-approximation algorithm for tsp (known as the "double-tree algorithm"). One might hope that since we have a (1, 2)-robust algorithm for robust mst, one could take its output and apply the double-tree algorithm to get a (2, 4)-robust solution to robust tsp. Unfortunately, we show in Section 3.1 that this algorithm is not (α, β)-robust for any α = o(n), β = o(n). Nevertheless, we are able to leverage the connection to mst to arrive at a (4.5, 3.75)-robust algorithm for tsp, given in Section 3.3.

3.1 Failure of Double-Tree Algorithm

The black-box reduction of [16] for turning exact algorithms into (1, 2)-robust algorithms simply uses the exact algorithm to find the optimal solution when all w_e are set to (ℓ_e + u_e)/2 and outputs this solution (see [16] for details on its analysis). We give an example of a robust tsp instance where applying the double-tree algorithm to the (1, 2)-robust mst generated by this algorithm does not give a robust tsp solution.
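As a reference point for this discussion, here is a minimal sketch of the double-tree heuristic itself (assuming the networkx library is available; the function name and arguments are ours). Since this paper's tsp variant allows revisiting vertices and edges, the tour is simply a depth-first walk of the mst that traverses every tree edge twice, costing exactly twice the mst weight:

    import networkx as nx

    def double_tree_tour(G, root):
        """Double-tree heuristic: walk a minimum spanning tree depth-first,
        recording each vertex on entry and on the way back, so every tree
        edge is used exactly twice."""
        T = nx.minimum_spanning_tree(G, weight="weight")
        tour = [root]

        def walk(u, parent):
            for v in T.neighbors(u):
                if v != parent:
                    tour.append(v)  # traverse edge (u, v) outward
                    walk(v, u)
                    tour.append(u)  # ...and back to u

        walk(root, None)
        return tour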
Since the doubling of this mst is a 2-approximation for tsp when all w_e are set to (ℓ_e + u_e)/2, this example will also show that using an approximation algorithm instead of an exact algorithm in the black-box reduction fails to give any reasonable robustness guarantee, as stated in Section 1.

Consider an instance of robust tsp with vertices V = {v_1, v_2, ..., v_n}, where there is a "type-1" edge from v_1 to v_i with length 1 − ε for some ε > 1/(2(n−1)), and where there is a "type-2" edge from v_i to v_{i+1} for all i, as well as from v_n to v_1, with length in the range [0, 2 − 2/(n−1)].

Consider mrs, which uses n − 1 type-2 edges and two type-1 edges to connect v_1 to the rest of the tour. (We do not prove that this is mrs: even if it is not, it suffices to upper bound mr by this solution's regret.) Its regret is maximized in the realization where all the type-2 edges it is using have length 2 − 2/(n−1) and the type-2 edge it is not using has length 0. Note that if a solution contains a type-2 edge of length 2 − 2/(n−1), we can replace it with the two type-1 edges it is adjacent to, and the cost of the solution decreases since we set ε > 1/(2(n−1)). In turn, the optimum solution for this realization uses the type-2 edge with length 0, the two type-1 edges adjacent to this type-2 edge once, and the other n − 2 type-1 edges twice. So mrs has cost (n−1)(2 − 2/(n−1)) + 2(1−ε) ≤ 2(n−1), whereas opt has cost 2(n−1)(1−ε). Then, the regret of this solution is at most 2εn.

When all edge costs are set to (ℓ_e + u_e)/2, since ε > 1/(2(n−1)) the minimum spanning tree of the graph is a star centered at v_1, i.e., all the length 1 − ε edges. So the (1, 2)-robust algorithm outputs this tree for mst. Doubling this tree gives a solution to the robust tsp instance that costs 2(n−1)(1−ε) in all realizations of edge costs. Consider the realization d where all the type-2 edges used by mrs have length 0. Then mrs costs 2 − 2ε, and the optimal solution costs at most 2 − 2ε as well. If the double-tree solution is (α, β)-robust, we get that:

2(n−1)(1−ε) ≤ α · opt(d) + β · mr ≤ α · (2 − 2ε) + 2βεn.

Setting ε to, e.g., 1/n gives that one of α, β is Ω(n).

3.2 LP Relaxation

We use the LP relaxation of robust traveling salesman given in Figure 1. This is the standard subtour LP (see, e.g., [22]), but augmented with variables specifying the edges used to visit each new vertex, as well as with the regret constraint set. Integrally, y_{uv} is 1 if, splitting the tour into subpaths at each point where a vertex is visited for the first time, there is a subpath from u to v (or vice-versa). That is, y_{uv} is 1 if between the first time u is visited and the first time v is visited, the tour only goes through vertices that were already visited before visiting u. f_{u,v,e} is 1 if the edge e is used on this subpath. We use x_e to denote ∑_{u,v∈V} f_{u,v,e} for brevity. We discuss in this subsection why the constraints other than the regret constraint set in (3) are identical to the standard tsp polytope. This discussion may be skipped without affecting the readability of the rest of the paper.

Minimize r subject to
  ∀∅ ≠ S ⊂ V : ∑_{u∈S, v∈V\S} y_{uv} ≥ 2
  ∀v ∈ V : ∑_{u≠v} y_{uv} = 2
  ∀∅ ≠ S ⊂ V, u ∈ S, v ∈ V\S : ∑_{e∈δ(S)} f_{u,v,e} ≥ y_{uv}
  ∀d : ∑_{e∈E} x_e w_e ≤ opt(d) + r     (3)
  ∀u, v ∈ V, u ≠ v : 0 ≤ y_{uv} ≤ 1
  ∀e ∈ E, u, v ∈ V, u ≠ v : 0 ≤ f_{u,v,e} ≤ 1
  ∀e ∈ E : x_e ≤ 2

Fig. 1. The Robust TSP Polytope.

The standard LP for tsp is the subtour LP (see, e.g., [22]), which is as follows:

min ∑_{(u,v)∈E} w_{uv} y_{uv}
s.t. ∀∅ ≠ S ⊂ V : ∑_{(u,v)∈δ(S)} y_{uv} ≥ 2
     ∀v ∈ V : ∑_{(u,v)∈E} y_{uv} = 2     (4)
     ∀(u, v) ∈ E : 0 ≤ y_{uv} ≤ 1
where δ(S) denotes the set of edges with one endpoint in S. Note that because the graph is undirected, the order of u and v in terms such as (u, v), y_{uv}, and f_{u,v,e} is immaterial; e.g., there is no distinction between edge (u, v) and edge (v, u), and y_{uv} and y_{vu} denote the same variable. This LP is written for the problem formulation where the triangle inequality holds, and thus we only need to consider tours that are cycles visiting every vertex exactly once. We are concerned, however, with the formulation where the triangle inequality does not necessarily hold, but tours can revisit vertices and edges multiple times. To modify the subtour LP to account for this formulation, we instead let y_{uv} be an indicator variable for whether our solution connects u to v using some path in the graph. Using this definition of y_{uv}, the subtour LP constraints then tell us that we must buy a set of paths such that a set of edges directly connecting the endpoints of the paths would form a cycle visiting every vertex exactly once. Then, we introduce variables f_{u,v,e} denoting that we are using the edge e on the path from u to v. For ease of notation, we let x_e = ∑_{u,v∈V} f_{u,v,e} denote the number of times a fractional solution uses the edge e in paths. We can then use standard constraints from the canonical shortest path LP to ensure that in an integer solution, y_{uv} is set to 1 only if for some path from u to v, all edges e on the path have f_{u,v,e} set to 1.

Lastly, note that the optimal tour does not use an edge more than twice. Suppose a tour uses the edge e = (u, v) thrice. By fixing a start/end vertex for the tour, we can split the tour into e, T_1, e, T_2, e, T_3, where T_1 is the part of the tour between the first and second use of e, T_2 is the part of the tour between the second and third use of e, and T_3 is the part of the tour after the third use of e. Because the tour starts and ends at the same vertex (u or v), and each of the three uses of edge e goes from u to v or vice versa, the number of T_1, T_2, and T_3 that go from u to v or vice-versa (as opposed to going from u to u or v to v) must be odd, and hence not zero. Without loss of generality, we can assume T_1 goes from u to v. Then, the tour T̄_1, T_2, e, T_3, where T̄_1 denotes the reversal of T_1, is a valid tour and costs strictly less than the original tour. So any tour using an edge more than twice is not optimal. This lets us add the constraint x_e ≤ 2 to the LP without affecting the optimal solution. This gives the formulation for tsp without triangle inequality but with repeated edges allowed:

min ∑_{e∈E} x_e w_e
s.t. ∀∅ ≠ S ⊂ V : ∑_{u∈S, v∈V\S} y_{uv} ≥ 2
     ∀v ∈ V : ∑_{u≠v} y_{uv} = 2
     ∀∅ ≠ S ⊂ V, u ∈ S, v ∈ V\S : ∑_{e∈δ(S)} f_{u,v,e} ≥ y_{uv}     (5)
     ∀u, v ∈ V, u ≠ v : 0 ≤ y_{uv} ≤ 1
     ∀e ∈ E, u, v ∈ V, u ≠ v : 0 ≤ f_{u,v,e} ≤ 1
     ∀e ∈ E : x_e ≤ 2

By integrality of the shortest path polytope, if we let d_{uv} denote the length of the shortest path from u to v, then ∑_{e∈E, u,v∈V} f_{u,v,e} w_e ≥ ∑_{u,v∈V} d_{uv} y_{uv}. In particular, if we fix the values of y_{uv}, the optimal setting of the f_{u,v,e} values is to set f_{u,v,e} to y_{uv} for every e on the shortest path from u to v. So (5) without the triangle inequality assumption is equivalent to (4) with the triangle inequality assumption. In particular, the integrality gap of (5) is the same as the integrality gap of (4), which is known to be at most 3/2 [23]. Then, adding a variable r for the fractional solution's regret and the regret constraint set gives (3).
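As Section 3.3 notes below, the non-regret constraints of (3) can be separated by min-cut computations. As a small illustration of the simplest case (a sketch assuming the networkx library; the function and variable names are ours), the cut constraints ∑_{u∈S, v∈V\S} y_{uv} ≥ 2 can be checked with one global minimum cut on the support graph of y:

    import networkx as nx

    def separate_cut_constraints(V, y):
        """Check sum_{u in S, v not in S} y_uv >= 2 for every nonempty proper
        subset S of V via a global min cut; y maps frozenset({u, v}) to a
        fractional value. Returns a violated S, or None if none exists."""
        H = nx.Graph()
        H.add_nodes_from(V)
        for e, val in y.items():
            if val > 0:
                u, v = tuple(e)
                H.add_edge(u, v, weight=val)
        if not nx.is_connected(H):
            # a disconnected support graph yields a cut of value 0
            return set(next(iter(nx.connected_components(H))))
        cut_value, (S, _) = nx.stoer_wagner(H, weight="weight")
        return set(S) if cut_value < 2 else None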
3.3 Approximate Separation Oracle

We now describe the separation oracle RRTSP-Oracle used to separate (3). All constraints except the regret constraint set can be separated in polynomial time by solving a min-cut problem. Recall that exactly separating the regret constraint set involves finding an "adversary" sol that maximizes max_d [∑_{e∈E} x_e w_e − sol(d)], and seeing if this quantity exceeds r. However, since tsp is NP-hard, finding this solution in general is NP-hard. Instead, we will only consider a solution sol if it is a walk on some spanning tree T, and find the one that maximizes max_d [∑_{e∈E} x_e w_e − sol(d)].

Algorithm 1 RRTSP-Oracle(G(V, E), {(ℓ_e, u_e)}_{e∈E}, (x, y, r))

Input: Undirected graph G(V, E), lower and upper bounds on edge lengths {(ℓ_e, u_e)}_{e∈E}, solution (x = {f_{u,v,e}}_{e∈E, u,v∈V}, y = {y_{uv}}_{u,v∈V}, r) to (3)
1: Check all constraints of (3) except the regret constraint set, return any violated constraint that is found
2: G′ ← copy of G where e has weight u_e x_e − (ℓ_e x_e − 2ℓ_e)
3: T′ ← minimum spanning tree of G′
4: if ∑_{e∈T′} (ℓ_e x_e − 2ℓ_e) + ∑_{e∉T′} u_e x_e > r then
5:   return the violated inequality ∑_{e∈T′} (ℓ_e x_e − 2ℓ_e) + ∑_{e∉T′} u_e x_e ≤ r
6: else
7:   return "Feasible"
8: end if

Fig. 2. Separation Oracle for (3).

Fix any sol that is a walk on some spanning tree T. For any e, if e is not in T, the regret of x, y against sol is maximized by setting e's length to u_e. If e is in T, then sol pays 2w_e for that edge whereas the fractional solution pays x_e w_e ≤ 2w_e, so to maximize the fractional solution's regret, w_e should be set to ℓ_e. This gives that the regret of the fractional solution x against any sol that is a spanning tree walk on T is

∑_{e∈T} (ℓ_e x_e − 2ℓ_e) + ∑_{e∉T} u_e x_e = ∑_{e∈E} u_e x_e − ∑_{e∈T} (u_e x_e − (ℓ_e x_e − 2ℓ_e)).

The quantity ∑_{e∈E} u_e x_e is fixed with respect to T, so finding the spanning tree T that maximizes this quantity is equivalent to finding the T that minimizes ∑_{e∈T} (u_e x_e − (ℓ_e x_e − 2ℓ_e)). But this is just an instance of the minimum spanning tree problem where edge e has weight u_e x_e − (ℓ_e x_e − 2ℓ_e), and thus we can find T in polynomial time. This gives the following lemma:

Lemma 3.2. For any instance of robust traveling salesman, there exists an algorithm RRTSP-Oracle that, given a solution (x, y, r) to (3), either:
• Outputs a separating hyperplane for (3), or
• Outputs "Feasible," in which case (x, y) is feasible for the (non-robust) TSP LP and ∀d : ∑_{e∈E} x_e w_e ≤ 2 · opt(d) + r.

Proof of Lemma 3.2. RRTSP-Oracle is given in Figure 2. All inequalities except the regret constraint set can be checked exactly by RRTSP-Oracle. Consider the tree T′ computed in RRTSP-Oracle and d′ with d′_e = ℓ_e for e ∈ T′ and d′_e = u_e for e ∉ T′. The only other violated inequality RRTSP-Oracle can output is the inequality ∑_{e∈T′} (ℓ_e x_e − 2ℓ_e) + ∑_{e∉T′} u_e x_e ≤ r in line 5, which is equivalent to ∑_{e∈E} x_e d′_e ≤ 2 ∑_{e∈T′} ℓ_e + r. Since 2 ∑_{e∈T′} ℓ_e is the cost of a tour in realization d′ (the tour that follows a DFS on the spanning tree T′), this inequality is implied by the inequality ∑_{e∈E} x_e d′_e ≤ opt(d′) + r from the regret constraint set. Furthermore, RRTSP-Oracle only outputs this inequality when it is actually violated.

So, it suffices to show that if there exists d such that ∑_{e∈E} x_e w_e > 2 · opt(d) + r, then RRTSP-Oracle outputs a violated inequality on line 5. Since opt(d) visits all vertices, it contains some spanning tree T, such that opt(d) ≥ ∑_{e∈T} w_e. Combining these inequalities gives
∑_{e∈E} x_e w_e > 2 ∑_{e∈T} w_e + r.

Since all x_e are at most 2, setting w_e = ℓ_e for e ∈ T and w_e = u_e otherwise can only increase ∑_{e∈E} x_e w_e − 2 ∑_{e∈T} w_e, so

∑_{e∈T} ℓ_e x_e + ∑_{e∉T} u_e x_e > 2 ∑_{e∈T} ℓ_e + r ⟹ ∑_{e∈E} u_e x_e − ∑_{e∈T} (u_e x_e − (ℓ_e x_e − 2ℓ_e)) > r.

Then RRTSP-Oracle finds a minimum spanning tree T′ on G′, i.e., T′ such that

∑_{e∈T′} (u_e x_e − (ℓ_e x_e − 2ℓ_e)) ≤ ∑_{e∈T} (u_e x_e − (ℓ_e x_e − 2ℓ_e)),

which combined with the previous inequality gives

∑_{e∈E} u_e x_e − ∑_{e∈T′} (u_e x_e − (ℓ_e x_e − 2ℓ_e)) > r ⟹ ∑_{e∈T′} (ℓ_e x_e − 2ℓ_e) + ∑_{e∉T′} u_e x_e > r. □

By using the ellipsoid method with separation oracle RRTSP-Oracle and the fact that (3) has optimum at most mr, we get a (2, 1)-robust fractional solution. Applying Theorem 1.3, as well as the fact that the TSP polytope has integrality gap 3/2 (see, e.g., [22]) and that the TSP problem has a 3/2-approximation, gives Theorem 3.1.

4 A LOCAL SEARCH ALGORITHM FOR STEINER TREE

In this section, we describe a local search algorithm for Steiner tree, given in [13]. By a simplified version of the analysis appearing in [13], we show that the algorithm is 4-approximate. As with many local search algorithms, this algorithm could run in superpolynomial time in the worst case. Standard tricks can be used to modify this algorithm into a polynomial-time (4 + ε)-approximation. This algorithm will serve as a primitive in the algorithms we design in Section 5.

The local moves considered by the algorithm are path swaps, defined as follows: If the current Steiner tree is T, the algorithm can pick any two vertices u, v in T such that there exists a path from u to v in which all vertices except u and v are not in T (and thus all edges on the path are not in T). The algorithm may take any path P from u to v of this form; it suffices to consider only the shortest path of this form. The path P is added to T, inducing a cycle. The algorithm then picks a subpath P′ in the cycle and removes it from T, while maintaining that T is a feasible Steiner tree. It suffices to consider only maximal subpaths: these are the subpaths formed by splitting the cycle at every vertex with degree at least 3 in T ∪ P. Let n denote the number of nodes and m the number of edges in the graph. Since there are at most n² pairs of vertices u, v, a shortest path P between u and v can be found in O(m + n log n) time, and all maximal subpaths in the cycle induced in T ∪ P can be found in O(n) time, if there is a move that improves the cost of T, we can find it in polynomial time.

We will use the following lemma to show the approximation ratio.

Lemma 4.1. For any tree, the fraction of vertices with degree at most 2 is strictly greater than 1/2.

Proof. This follows from a Markov bound on the random variable X defined as the degree of a uniformly random vertex minus one. A tree with n vertices has total degree 2n − 2 and hence average degree less than 2, so E[X] < 1. In turn, the fraction of vertices with degree 3 or greater is Pr[X ≥ 2] < 1/2. □

Theorem 4.2. Let T be a solution to an instance of Steiner tree such that no path swap reduces the cost of T. Then T is a 4-approximation.

Proof. Consider any other solution T* to the Steiner tree instance. We partition the edges of T into subpaths such that these subpaths' endpoints are (i) vertices with degree 3 or larger in T, or (ii) vertices in both T and T* (which might also have degree 3 or larger in T).
Besides the endpoints, all other vertices in each subpath have degree 2 and are in T but not in T*. Note that a vertex may appear as the endpoint of more than one subpath. Note also that the set of vertices in T* includes the terminals, which, without loss of generality, include all leaves of T. This, along with condition (i) for endpoints, ensures the partition into subpaths is well-defined, i.e., if a subpath ends at a leaf of T, that leaf is in T*.

We also decompose T* into subpaths, but some edges may be contained in two of these subpaths. To decompose T* into subpaths, we first partition the edges of T* into maximal connected subgraphs of T* whose leaves are vertices in T (including terminals) and whose internal vertices are not in T. Note that some vertices may appear in more than one subgraph, e.g., an internal vertex of T* that is in T becomes a leaf in multiple subgraphs. Since these subgraphs are not necessarily paths, we next take any DFS walk on each of these subgraphs starting from one of their leaves (that is, one of the vertices in T). We take the route traversed by the DFS walk and split it into subpaths at every point where the DFS walk reaches a leaf. This now gives a set of subpaths in T* such that each subpath's endpoints are vertices in T, no other vertices on a subpath are in T, and no edge appears in more than two subpaths.

Let A and F denote the sets of subpaths we decomposed T and T* into, respectively. For P ∈ A, let f(P) ⊆ F be the set of subpaths Q in F such that (T \ P) ∪ Q is a feasible Steiner tree, i.e., Q can be swapped for P, and let f(S) = ∪_{P∈S} f(P). We will show that for any S ⊆ A, |f(S)| ≥ |S|/2. By an extension of Hall's Theorem (Fact 15 in [13]), this implies the existence of a weight function g : A × F → R≥0 such that:

(1) g(P, Q) > 0 only if Q can be swapped for P.
(2) For all subpaths P ∈ A, ∑_{Q∈f(P)} g(P, Q) = 1.
(3) For all subpaths Q ∈ F, ∑_{P∈f⁻¹(Q)} g(P, Q) ≤ 2.

This weight function then gives:

c(T) = ∑_{P∈A} c(P) = ∑_{P∈A} ∑_{Q∈f(P)} c(P) g(P, Q) ≤ ∑_{Q∈F} ∑_{P∈f⁻¹(Q)} c(Q) g(P, Q) ≤ ∑_{Q∈F} 2c(Q) ≤ 4c(T*),

where f⁻¹(Q) = {P ∈ A : Q ∈ f(P)}. The first inequality holds by the assumption in the theorem statement that no swaps reduce the cost of T, so for P ∈ A and Q ∈ f(P), c(P) ≤ c(Q). The last inequality follows from the fact that every edge in T* appears in at most two subpaths in F.

We now turn towards proving that for any S ⊆ A, |f(S)| ≥ |S|/2. Fix any S ⊆ A. Suppose that we remove all of the edges on paths P ∈ S from T, and also remove all vertices on these paths except their endpoints. After removing these nodes and edges, we are left with |S| + 1 connected components. Let C be a tree with |S| + 1 vertices, one for each of these connected components, with an edge between any pair of vertices in C whose corresponding components are connected by a subpath P ∈ S.

Consider any vertex of C with degree at most 2. We claim the corresponding component contains a vertex in T*. Let B denote the set of vertices in the corresponding component that are endpoints of subpaths in A. There must be at least one such vertex in B. Furthermore, it is not possible that all of the vertices in B are internal vertices of T with degree at least 3, since at most two subpaths in S leave this component and there are no cycles in T. The only other option for endpoints is vertices in T*, so this component must contain some vertex in T*. Applying Lemma 4.1, strictly more than (|S| + 1)/2 (i.e., at least |S|/2 + 1) of the components have degree at most 2 and, by the previous argument, contain a vertex in T*.
These vertices are connected by T*, and since no subpath in F has internal vertices in T, no subpath in F passes through more than two of these components. Hence, at least |S|/2 of the subpaths in F have endpoints in two different components, because at least |S|/2 edges are required to connect |S|/2 + 1 vertices. In turn, any of these |S|/2 paths Q can be swapped for one of the subpaths P ∈ S that lies on the path between the components containing Q's endpoints. This shows that |f(S)| ≥ |S|/2, as desired. □

5 ALGORITHM FOR THE ROBUST STEINER TREE PROBLEM

In this section, our goal is to find a fractional solution to the LP in Fig. 3 for robust Steiner tree. By Theorem 1.3 and known approximation/integrality gap results for Steiner tree, this will give the following theorem:

Theorem 5.1. There exists a (2755, 64)-robust algorithm for the Steiner tree problem.

Minimize r subject to
  ∀S ⊂ V such that ∅ ⊂ S ∩ R ⊂ R : ∑_{e∈δ(S)} x_e ≥ 1     (6)
  ∀d such that w_e ∈ [ℓ_e, u_e] for all e : ∑_{e∈E} x_e w_e ≤ opt(d) + r     (7)
  ∀e ∈ E : x_e ∈ [0, 1]     (8)

Fig. 3. The Robust Steiner Tree Polytope (R denotes the set of terminals).

It is well known that the standard Steiner tree polytope admits an exact separation oracle (by solving the s,t-min-cut problem using edge weights x_e for all terminal pairs s, t), so it suffices to find an approximate separation oracle for the regret constraint set. So, we focus in this section on deriving an approximate separation oracle. Doing so is the most technically difficult part of the paper, so we break the section up into multiple parts as follows: In Section 5.1, we start with a simpler case where ℓ_e = 0 for all edges, and show how the local search algorithm of the previous section can help design the separation oracle in this case. In Section 5.2, we state our main technical lemma (Lemma 5.5), give a high-level overview of its proof, and show how it implies Theorem 5.1. In Section 5.3, we give the algorithm of Lemma 5.5. In Section 5.4, we analyze this algorithm's guarantees, deferring some proofs that are highly technical and already covered at a high level in Section 5.2. In Section 5.5, we give the deferred proofs. We believe the high-level overview in Section 5.2 captures the main ideas of Sections 5.3, 5.4, and 5.5, and thus a reader who just wants to understand the algorithm and analysis at a high level can stop reading after Section 5.2. A reader who wants a deeper understanding of the algorithm's design choices and implementation, but is not too concerned with the details of the analysis, can stop reading after Section 5.3. A reader who wants a deeper understanding of the analysis can stop reading after Section 5.4 and still have a strong understanding of the analysis.

5.1 Special Case Where the Lower Bounds on All Edge Lengths Are ℓ_e = 0

In this section, we give a simple algorithm and analysis for the special case where ℓ_e = 0 for all edges. First, we create the derived instance of the Steiner tree problem, which is a copy of the input graph G with edge weights u_e x_e + ℓ_e − ℓ_e x_e. As noted earlier, the optimal Steiner tree T* on the derived instance maximizes the regret of the fractional solution x. However, since Steiner tree is NP-hard, we cannot hope to exactly find T*. We need a Steiner tree T̂ such that the regret caused by it can be bounded against that caused by T*.
The difficulty is that the regret corresponds to the total weight of edges not in the Steiner tree (plus an offset that we will address later), whereas standard Steiner tree approximations give guarantees in terms of the total weight of edges in the Steiner tree. We overcome this difficulty by requiring a stricter notion of "difference approximation": that the weight of the edges in T̂ \ T* be bounded against those in T* \ T̂. Note that this condition ensures not only that the weight of the edges in T̂ is bounded against those in T*, but also that the weight of the edges not in T̂ is bounded against that of the edges not in T*. We show the following lemma to obtain the difference approximation:

Lemma 5.2. For any ε > 0, there exists a polynomial-time algorithm for the Steiner tree problem such that if opt denotes the set of edges in the optimal solution and c(X) denotes the total weight of edges in X, then for any input instance of Steiner tree, the output solution alg satisfies c(alg \ opt) ≤ (4 + ε) · c(opt \ alg).

Proof. The algorithm we use is the local search algorithm described in Section 4, which finds alg such that c(alg) ≤ 4 · c(opt). Suppose that the cost of each edge e ∈ alg ∩ opt is now changed from its initial value to 0. After this change, alg remains locally optimal, because for every feasible solution reachable by a local move from alg, the amount by which the cost of alg has decreased by setting these edge costs to zero is at least the amount by which that solution's cost has decreased; hence no local move causes a decrease in cost. Thus, alg remains a 4-approximation under the new costs, under which alg costs c(alg \ opt) and opt costs c(opt \ alg), which implies c(alg \ opt) ≤ 4 · c(opt \ alg).

We also need to show that the algorithm converges in polynomially many iterations. The authors in [13] achieve this convergence by discretizing all the edge costs to the nearest multiple of (ε/poly(n)) · c(apx) for an initial solution apx such that c(opt) ≤ c(apx) ≤ n · c(opt) (e.g., a simple way to do so is to start with a solution formed by the union of shortest paths between terminals, and then remove edges which cause cycles arbitrarily; this solution has cost between c(opt) and n · c(opt); see Section B.3 of [13] for more details). This guarantees that the algorithm converges in poly(n)/ε iterations, at an additive ε · c(opt) cost. For a standard approximation algorithm this is not an issue, but for an algorithm that aims for a guarantee of the form c(alg \ opt) ≤ O(1) · c(opt \ alg), an additive ε · c(opt) might be too much. We modify the algorithm as follows to ensure that it converges in polynomially many iterations: we only consider swapping out P′ for P if the decrease in cost is at least ε/4 times the cost of P′, and we always choose the swap of this kind that decreases the cost by the largest amount. (Note that c(P′) − c(P) and (c(P′) − c(P))/c(P′) are both maximized by maximizing c(P′) and minimizing c(P). Any path P from u to v that we consider adding is independent of the paths we can consider removing, since P by definition does not intersect our solution. So in finding a swap satisfying these conditions, if one exists, it still suffices to consider only swaps between shortest paths P and longest subpaths P′ in the resulting cycles, as before.)

We now show the algorithm converges. Later in the section, we will prove two claims, so for brevity's sake we do not include the proofs of the claims here. The first claim is that as long as c(alg \ opt) > (4 + ε) · c(opt \ alg), there is a swap between alg and opt whose decrease in cost is at least ε/4 times the cost of the path being swapped out, and is at least ε/4n times c(alg \ opt) (the proof follows similarly to that of Lemma 5.10 in Section 5.3). The second claim is that in any swap, the quantity c(alg \ opt) − c(opt \ alg) decreases by the same amount that c(alg) does (see Lemma 5.12 in Section 5.4). So, we use c(alg \ opt) − c(opt \ alg) as a potential to bound the number of swaps.
This potential is initially at most n · max_e c_e, is always at least min_e c_e as long as c(alg \ opt) > c(opt \ alg), and each swap decreases it multiplicatively by at least a factor of (1 − ε/4n) as long as c(alg \ opt) > (4 + ε) · c(opt \ alg). Thus the algorithm only needs to make log(n · max_e c_e / min_e c_e) / (−log(1 − ε/4n)) swaps to arrive at a solution that is a (4 + ε)-approximation, which is polynomial in the input size. □

Recall that the regret caused by T̂ is not exactly the weight of the edges not in T̂, but includes a fixed offset of ∑_{e∈E} (ℓ_e − ℓ_e x_e). If ℓ_e = 0 for all edges, i.e., the offset is 0, then we can recover a robust algorithm from Lemma 5.2 alone, with much better constants than in Theorem 5.1:

Lemma 5.3. For any instance of robust Steiner tree for which all ℓ_e = 0, for every ε > 0 there exists an algorithm RRST-Oracle-ZLB which, given a solution (x, r) to the LP in Fig. 3, either:
• Outputs a separating hyperplane for the LP in Fig. 3, or
• Outputs "Feasible," in which case x is feasible for the (non-robust) Steiner tree LP and ∀d : ∑_{e∈E} x_e w_e ≤ opt(d) + (4 + ε) · r.

Algorithm 2 RRST-Oracle-ZLB(G(V, E), {u_e}_{e∈E}, (x, r))

Input: Undirected graph G(V, E), upper bounds on edge lengths {u_e}_{e∈E}, solution (x = {x_e}_{e∈E}, r) to the LP in Fig. 3
1: Check all constraints of the LP in Fig. 3 except the regret constraint set, return any violated constraint that is found
2: G′ ← copy of G where e has cost u_e x_e
3: T′ ← output of algorithm from Lemma 5.2 on G′
4: if ∑_{e∉T′} u_e x_e > r then
5:   return the violated inequality ∑_{e∉T′} u_e x_e ≤ r
6: else
7:   return "Feasible"
8: end if

Fig. 4. Separation Oracle for the LP in Fig. 3 when ℓ_e = 0 for all e.

RRST-Oracle-ZLB is given in Fig. 4. Via the ellipsoid method, this gives a (1, 4 + ε)-robust fractional solution. Using Theorem 1.3, the fact that the integrality gap of the LP we use is 2 [21], and the fact that there is a (ln 4 + ε) ≈ 1.39-approximation for Steiner tree [8], with an appropriate choice of ε we get the following corollary:

Corollary 5.4. There exists a (2.78, 12.51)-robust algorithm for Steiner tree when ℓ_e = 0 for all e ∈ E.

Proof of Lemma 5.3. All inequalities except the regret constraint set can be checked exactly by RRST-Oracle-ZLB. Consider the tree T′ computed in RRST-Oracle-ZLB and d′ with d′_e = 0 for e ∈ T′ and d′_e = u_e for e ∉ T′. The only other violated inequality RRST-Oracle-ZLB can output is the inequality ∑_{e∉T′} u_e x_e ≤ r in line 5, which is equivalent to ∑_{e∈E} x_e d′_e ≤ T′(d′) + r (note that T′(d′) = 0); since opt(d′) ≤ T′(d′), this is implied by the inequality ∑_{e∈E} x_e d′_e ≤ opt(d′) + r from the regret constraint set. Furthermore, RRST-Oracle-ZLB only outputs this inequality when it is actually violated. So, it suffices to show that if there exist d, sol such that ∑_{e∈E} x_e w_e > sol(d) + (4 + ε) · r, then RRST-Oracle-ZLB outputs a violated inequality on line 5, i.e., finds a Steiner tree T′ such that ∑_{e∉T′} u_e x_e > r.

Suppose there exist d, sol such that ∑_{e∈E} x_e w_e > sol(d) + (4 + ε) · r. Let d* be the vector obtained from d by replacing w_e with u_e for edges not in sol and with 0 for edges in sol.
Replacing d with d* can only increase ∑_{e∈E} x_e w_e − sol(d), i.e.:

∑_{e∉sol} u_e x_e = ∑_{e∈E} x_e d*_e > sol(d*) + (4 + ε) · r = (4 + ε) · r.     (9)

Consider the graph G′ made by RRST-Oracle-ZLB. We partition the edges into four sets E_0, E_s, E_t, E_st, where E_0 = E \ (sol ∪ T′), E_s = sol \ T′, E_t = T′ \ sol, and E_st = sol ∩ T′. Let c(E_i), for E_i ∈ {E_0, E_s, E_t, E_st}, denote ∑_{e∈E_i} u_e x_e, i.e., the total cost of the edge set E_i in G′. Since d* has d*_e = 0 for e ∈ sol, from (9) we get that c(E_0) + c(E_t) > (4 + ε) · r.

Now note that ∑_{e∉T′} u_e x_e = c(E_0) + c(E_s). Lemma 5.2 gives that (4 + ε) · c(E_s) ≥ c(E_t). Putting it all together, we get that:

∑_{e∉T′} u_e x_e = c(E_0) + c(E_s) ≥ c(E_0) + c(E_t)/(4 + ε) ≥ (c(E_0) + c(E_t))/(4 + ε) > ((4 + ε) · r)/(4 + ε) = r. □

5.2 General Case for Arbitrary Lower Bounds on Edge Lengths: High-Level Overview

In this section, we give our main lemma (Lemma 5.5), a high-level overview of the algorithm and analysis proving this lemma, and show how the lemma implies Theorem 5.1.

In the general case, the approximation guarantee given in Lemma 5.2 alone does not suffice because of the offset of ∑_{e∈E} (ℓ_e − ℓ_e x_e). We instead rely on a stronger notion of approximation, formalized in the next lemma, that provides simultaneous approximation guarantees on two sets of edge weights: w̄_e = u_e x_e − ℓ_e x_e + ℓ_e and w′_e = ℓ_e − ℓ_e x_e. The guarantee on w′ can then be used to handle the offset.

Lemma 5.5. Let G be a graph with terminals R and two sets of edge weights w̄ and w′. Let sol be any Steiner tree connecting R. Let Γ > 1, ε > 0, and 0 < δ < 4/35 be fixed constants. Then there exists a constant Γ′ (depending on Γ, ε, δ) and an algorithm that obtains a collection of Steiner trees alg, at least one of which (let us call it alg_i) satisfies:
• w̄(alg_i \ sol) ≤ 4Γ · w̄(sol \ alg_i), and
• w′(alg_i) ≤ (4Γ′ + ε + 1 + δ) · w′(sol).

The fact that Lemma 5.5 generates multiple solutions (but only polynomially many) is fine, because as long as we can show that one of these solutions causes sufficient regret, our separation oracle can just iterate over all solutions until it finds one that causes sufficient regret.

We give a high-level sketch of the proof of Lemma 5.5 here, and defer the full details to Section 5.4. The algorithm uses the idea of alternate minimization, alternating between a "forward phase" and a "backward phase." The goal of each forward phase/backward phase pair is to keep w′(alg) approximately fixed while obtaining a net decrease in w̄(alg). In the forward phase, the algorithm greedily uses local search, choosing swaps that decrease w̄ and increase w′ at the best "rate of exchange" between the two costs (i.e., the maximum ratio of decrease in w̄ to increase in w′), until w′(alg) has increased past some upper threshold. Then, in the backward phase, the algorithm greedily chooses swaps that decrease w′ while increasing w̄ at the best rate of exchange, until w′(alg) reaches some lower threshold, at which point we start a new forward phase. We guess the value of w′(sol) (we can run many instances of this algorithm and generate different solutions based on different guesses for this purpose) and set the upper threshold for w′(alg) appropriately so that we satisfy the approximation guarantee for w′, as in the skeleton below.
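The following skeleton shows the shape of this alternation (a schematic only, not the paper's exact procedure: the swap-selection subroutine, the thresholds, and the termination bookkeeping are simplified, and all names are ours; the actual algorithm also records the intermediate trees, which form the collection of solutions in Lemma 5.5):

    def alternate_minimization(alg, wbar, wprime, lower_thr, upper_thr, best_swap):
        """Alternate between a forward phase (decrease wbar at the best rate
        of exchange while wprime grows) and a backward phase (decrease wprime
        back below the lower threshold). best_swap(alg, minimize, penalize)
        is assumed to return the best such swap, or None if none exists."""
        while True:
            while wprime(alg) <= upper_thr:      # forward phase
                swap = best_swap(alg, minimize=wbar, penalize=wprime)
                if swap is None:
                    return alg
                alg = swap.apply(alg)
            while wprime(alg) > lower_thr:       # backward phase
                swap = best_swap(alg, minimize=wprime, penalize=wbar)
                if swap is None:
                    return alg
                alg = swap.apply(alg)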
For w̄, we show that as long as alg is not a 4Γ-difference approximation with respect to w̄, a forward phase/backward phase pair reduces w̄(alg) by a non-negligible amount (of course, if alg is a 4Γ-difference approximation, then we are done). This implies that after enough iterations, alg must be a 4Γ-difference approximation, as w̄(alg) can only decrease by a bounded amount. To show this, we claim that while alg is not a 4Γ-difference approximation, for sufficiently large Γ the following bounds on rates of exchange hold:

• For each swap in the forward phase, the ratio of the decrease in w̄(alg) to the increase in w′(alg) is at least some constant c_1 times w̄(alg \ sol) / w′(sol \ alg).
• For each swap in the backward phase, the ratio of the increase in w̄(alg) to the decrease in w′(alg) is at most some constant c_2 times w̄(sol \ alg) / w′(alg \ sol).

Before we discuss how to prove this claim, let us see why it implies that a forward phase/backward phase pair results in a net decrease in w̄(alg). If this claim holds, suppose we set the lower threshold (for w′(alg)) to be, say, 101 · w′(sol). That is, throughout the backward phase, we have w′(alg) > 101 · w′(sol). This lower threshold lets us rewrite our upper bound on the rate of exchange in the backward phase in terms of the lower bound on the rate of exchange for the forward phase:

c_2 · w̄(sol \ alg)/w′(alg \ sol) ≤ c_2 · w̄(sol \ alg)/(w′(alg) − w′(sol)) ≤ c_2 · w̄(sol \ alg)/(100 · w′(sol)) ≤ c_2 · w̄(sol \ alg)/(100 · w′(sol \ alg))
≤ c_2 · (1/4Γ) · w̄(alg \ sol)/(100 · w′(sol \ alg)) = (c_2/(400Γc_1)) · c_1 · w̄(alg \ sol)/w′(sol \ alg).

In other words, the bound in the claim on the rate of exchange in the forward phase is larger than the bound for the backward phase by a multiplicative factor of Ω(1) · Γ. While these bounds depend on alg and thus will change with every swap, let us make the simplifying assumption that through one forward phase/backward phase pair these bounds remain constant. Then, the change in w̄(alg) in one phase is just the rate of exchange for that phase times the change in w′(alg), which by definition of the algorithm is roughly equal in absolute value for the forward and backward phases. So this implies that the decrease in w̄(alg) in the forward phase is Ω(1) · Γ times the increase in w̄(alg) in the backward phase, i.e., the net change across the phases is a non-negligible decrease in w̄(alg) if Γ is sufficiently large.

Without the simplifying assumption, we can still show that the decrease in w̄(alg) in the forward phase is larger than the increase in w̄(alg) in the backward phase for large enough Γ′, using a much more technical analysis. In particular, our analysis will show there is a net decrease as long as:

min{ (4Γ − 1)/(8Γ), (4Γ′ − 1)(√Γ − 1)(√Γ − 1 − δ)ε / (16(1 + ε)Γ′) } − (e^{κ(4Γ + ε + 1 + δ)} − 1) > 0,     (10)

where

κ = 4(1 + ε)Γ / ((√Γ − 1)(√Γ − 1 − δ)(4Γ − 1)(4Γ′ − 1)).

Note that for any positive ε, δ, Γ, there exists a sufficiently large value of Γ′ for (10) to hold, since as Γ′ → ∞, we have κ → 0, so that

(e^{κ(4Γ + ε + 1 + δ)} − 1) → 0

and

min{ (4Γ − 1)/(8Γ), (4Γ′ − 1)(√Γ − 1)(√Γ − 1 − δ)ε / (16(1 + ε)Γ′) } → min{1/2, ε/(4 + 4ε)}.

So, the same intuition holds: as long as we are willing to lose a large enough value of Γ′, we can make the increase in w̄(alg) due to the backward phase negligible compared to the decrease in the forward phase, giving us a net decrease. It remains to argue that the claimed bounds on rates of exchange hold. Let us argue the claim for Γ = 4, although the ideas easily generalize to other choices of Γ.
Wof e do this by generalizing the analysis giving Lemma 5.2. This analysis shows that alg if is a locally optimal solution, then �(alg\ sol) ≤ 4· �(sol\ alg), i.e.,alg is a 4-diference approximation solof . The contrapositive of this statement is that alg if is not a 4- diference approximation, then there is at least one swap that will improve it by some amount. We modify the approach of [13] by weakening the notion of locally optimal. In particular, suppose we deine aalg solution to be “approximatelyž locally optimal if at least half of the (weighted) swaps betw �ein enalg paths\ sol and paths � insol\ alg satisfy �(�) ≤ 2�(�) (as opposed to �(�) ≤ �(�) in a locally optimal solution; the choice of 2 for both constants here implies Γ = 4). Then a modiication of the analysis of the local search algorithm, losing an additional factor of 4, shows that algif is approximately locally optimal, then �(alg\ sol) ≤ 16· �(sol\ alg). The contrapositive of this statement, however, has a stronger consequence than befor alg e: if is not a 16-diference approximation, then a weighted half of the swaps satisfy �(�) > 2�(�), i.e. reduce�(alg) by at least �(�) − �(�) > �(�) − �(�)/2 = �(�)/2. The decrease in�(alg) due to all of these swaps together is at least �(alg\ sol) times some constant. In addition, ′ ′ ′ since a swap between� and � increases� (alg) by at most � (�), the total increase in � due to these swaps ACM Trans. Algor. 18 • Arun Ganesh, Bruce M. Maggs, and Debmalya Panigrahi Algorithm 3 RRST-Oracle(� (� , �), {[ℓ ,� ]} , (x, �)) � � �∈� Input:Undirected graph � (� , �), lower and upper bounds on edge lengths{[ℓ ,� ]} , solutionx( = � � �∈� {� } , �) to the LP in Fig. 3 � �∈� 1: Check all constraints of the LP in Fig. 3 except regret constraint return set, any violated constraint that is found ′ ′ 2: � ← copy of� where � = � � − ℓ � + ℓ , � = ℓ − ℓ � � � � � � � � � � 3: alg ← output of algorithm from Lemma 5.5�on 4: for alg ∈ alg do Í Í Í 5: if � � + ℓ � − ℓ > � then � � � � � �∉alg �∈alg �∈alg � � � Í Í Í 6: return � � + ℓ � − ℓ ≤ � � � � � � �∉alg �∈alg �∈alg � � � 7: end if 8: end for 9: return “Feasiblež Fig. 5. Separation Oracle for LP in Fig. 3 is at most� (sol \ alg) times some other constant. An averaging argument then gives the rate of exchange bound for the forward phase in the claim, as desired. The rate of exchange bound for the backward phase follows analogously. This completes the algorithm and proof summary, although more details are needed to formalize these arguments. Moreover, we also need to show that the algorithm runs in polynomial time. We now formally deine our separation oracle RRST-Oracle in Fig. 5 and prove that it is an approximate separation oracle in the lemma below: Lemma 5.6. Fix anyΓ > 1, � > 0, 0 < � < 4/35 and let Γ be the constant given in Lemma 5.5. Let� = (4Γ + � + 2+ �)4Γ + 1 and � = 4Γ. Then there exists an algorithmRRST-Oracle that given a solution(x, �) to the LP in Fig. 3 either: • Outputs a separating hyperplane for the LP in Fig. 3, or • Outputs łFeasible,ž in which case x is feasible for the (non-robust) Steiner tree LP and ∀d : � � ≤ � · opt(d) + � · � . � � �∈� Proof. It suices to show that if there exists d, sol such that ︁ ︁ � � > � · sol(d) + � · �, i.e., � � − � · sol(d) > � · � � � � � �∈� �∈� then RRST-Oracle outputs a violated inequality on line 6, i.e., the algorithm inds a Steiner � such that tree ︁ ︁ ︁ � � + ℓ � − ℓ > � . � � � � � ′ ′ ′ �∉� �∈� �∈� Notice that in the inequality � � − � · sol(d) > � · �, � � �∈� ACM Trans. Algor. 
Robust Algorithms for TSP and Steiner Tree • 19 ′ ′ ′ replacing d withd where � = ℓ when � ∈ sol and � = � when � ∉ sol can only increase the left hand side. � � � � So replacing d withd and rearranging terms, we have " # ︁ ︁ ︁ ︁ ︁ ℓ � + � � > � ℓ + � · � = ℓ + (� − 1) ℓ + � · � . � � � � � � � �∈sol �∉sol �∈sol �∈sol �∈sol In particular, the regret of the fractional solution against sol is at least(� − 1) ℓ + � · �. �∈sol � ′ ′ Let � be the Steiner tree satisfying the conditions of Lemma 5.5 � =with � � − ℓ � + ℓ and � = ℓ − ℓ � . � � � � � � � � � ′ ′ ′ ′ Let � = � \(sol∪� ), � = sol\� , and � = � \ sol. Let �(� ) = ′(� � − ℓ � + ℓ ), i.e., the total weight 0 � � �∈� � � � � � ′ ′ ′ of the edges� in� . Now, note that the regret of the fractional solution against a solution using � is: edges ︁ ︁ ︁ ︁ ︁ � � + ℓ � − ℓ = (� � − ℓ � + ℓ ) − (ℓ − ℓ � ) � � � � � � � � � � � � � ′ ′ ′ ′ �∉� �∈� �∈� �∉� �∈� = �(� \ � ) − (ℓ − ℓ � ). � � � �∈� Plugging in � = sol, we then get that: ︁ ︁ �(� ) + �(� ) − (ℓ − ℓ � ) > (� − 1) ℓ + � · � . 0 � � � � � �∈sol �∈� Isolating �(� ) then gives: ︁ ︁ ︁ �(� ) > (� − 1) ℓ + � · �− (� � − ℓ � + ℓ ) + (ℓ − ℓ � ) � � � � � � � � � � �∈sol �∈� �∈� ︁ ︁ ︁ = (� − 1) ℓ + � · �− � � + (ℓ − ℓ � ). � � � � � � �∈sol �∈� �∉� 0 0 Since� = 4Γ, Lemma 5.5 along with an appropriate choice � giv of es that�(� ) ≤ ��(� ), and thus: � � " # ︁ ︁ ︁ �(� ) > (� − 1) ℓ + � · �− � � + (ℓ − ℓ � ) . � � � � � � � �∈sol �∈� �∉� 0 0 Recall that our goal is to show that �(� ) + �(� ) − (ℓ − ℓ � ) > �, i.e., that the regret of the fractional 0 � � � � �∈� solution against � is at least �. Adding�(� ) − (ℓ − ℓ � ) to both sides of the previous inequality, we can 0 � � � �∈� lower bound�(� ) + �(� ) − (ℓ − ℓ � ) as follows: 0 � � � � �∈� ACM Trans. Algor. 20 • Arun Ganesh, Bruce M. Maggs, and Debmalya Panigrahi �(� ) + �(� ) − (ℓ − ℓ � ) 0 � � � � �∈� " # ︁ ︁ ︁ > (� − 1) ℓ + � · �− � � + (ℓ − ℓ � ) � � � � � � �∈sol �∈� �∉� 0 0 ︁ ︁ + (� � − ℓ � + ℓ ) − (ℓ − ℓ � ) � � � � � � � � �∈� �∈� ︁ ︁ ︁ ︁ � − 1 � − 1 1 = �+ ℓ + (ℓ − ℓ � ) + � � − (ℓ − ℓ � ) � � � � � � � � � � � � �∈sol �∉� �∈� �∉� 0 0 0 ︁ ︁ ︁ ︁ � − 1− � 1 � − 1 ≥ �+ ℓ + (ℓ − ℓ � ) + � � − (ℓ − ℓ � ) ≥ � . � � � � � � � � � � � � �∈sol �∉� �∈� �∈� 0 0 � Here, the last inequality holds because by our setting �, wof e have � − 1− � = 4Γ + � + 1+ �, and thus Lemma 5.5 gives that ︁ ︁ ︁ � − 1− � � − 1− � (ℓ − ℓ � ) ≤ (ℓ − ℓ � ) ≤ ℓ . � � � � � � � � � �∈sol �∈sol �∈� By using Lemma 5.6 with the ellipsoid method and the fact that the LP optimum is mrat, w most e get an (�, �)-robust fractional solution. Then, Theorem 1.3 and known approximation/integrality gap results give us the following theorem, which with appropriate choice of constants gives Theorem 5.1: Theorem 5.7. Fix anyΓ > 1, � > 0, 0 < � < 4/35 and let Γ be the constant given in Lemma 5.5. Let� = (4Γ +�+2+�)4Γ+1 and � = 4Γ. Then there exists a polynomial-time(2� ln4+�,2� ln4+ln4+�)-robust algorithm for the Steiner tree problem. Proof of Theorem 5.7. By using the ellipsoid method with Lemma 5.6 we can compute a feasible (�, �)-robust fractional solution to the Steiner tree LP (as the robust Steiner tree LP has optimummr at ).most Then, the theorem follows from Theorem 1.3, and the fact that the polytope in Fig. 3 has integrality � = 2 and gap there is a� = (ln4+ �)-approximation for the Steiner tree problem due 8]to (The [ error parameters can be rescaled appropriately to get the approximation guarantee in the theorem statement). 
□ Optimizing �for in Theorem 5.7 subject to the constraints in (10), we get that for a ixed (small) �, � is minimized by setting Γ ≈ 9.284+ �(�), Γ ≈ 5.621+ �(�), � ≈ 2.241+ �(�) (for monotonically increasing � , � , � 1 2 3 1 2 3 which approach 0 as� approaches 0). Plugging in these values gives Theorem 5.1. 5.3 Algorithm Description In this section we give the algorithm description DoubleAppro for x, as well as a few lemmas that motivate our algorithm’s design and certify it is eicient. We will again use local search to ind moves that are improving with respect to �. However, now our goal is to show that we can do this without blowing up the cost with resp � .ect to We can start to show this via the following lemma, which generalizes the arguments in Section 4. Informally, it says that as long as a signiicant fraction (1/�) of the swaps (rather than all the swaps) that the local search ACM Trans. Algor. Robust Algorithms for TSP and Steiner Tree • 21 algorithm can make between its solution � and an adversarial solution � do not improve its objective by some factor� (rather than by any amount at all),� ’s cost can still be bounded by 4�� times�’s cost. ′ ′ From Lemma 5.8 to Lemma 5.11 we will refer to the cost functions on edges �, �by instead of �, � . This is because these lemmas are agnostic to the cost functions they are applied to and will be applied with both ′ ′ ′ ′ −1 � = �, � = � and � = � , � = � in our algorithm. We also deine A,F , �, � (·), � (·) as in the proof of Theorem 4.2 for these lemmas. Lemma 5.8. Let � and � be solutions to an instance of Steiner tree with edge costs � such that if all edges in� ∩ � have their costs set to 0, then for� ≥ 1, � ≥ 1, we have ︁ ︁ �(�, �)� (�) ≥ �(�, �)� (�). �∈A,�∈� (�):� (�)≤�� (�) �∈A,�∈� (�) Then � (� \ �) ≤ 4���(� \ �). Proof. This follows by generalizing the argument in the proof of Theorem 4.2. After setting costs of edges in � ∩ � to 0, note that � (�) = � (� \ �) and � (�) = � (� \ �). Then: � (� \ �) = � (�) �∈A ︁ ︁ ≤ �(�, �)� (�) �∈A �∈� (�) ︁ ︁ = �(�, �)� (�) −1 �∈� �∈� (�) ︁ ︁ ≤ � �(�, �)� (�) −1 �∈� �∈� (�):� (�)≤�� (�) ︁ ︁ ≤ �� �(�, �)� (�) −1 �∈� �∈� (�):� (�)≤�� (�) ︁ ︁ ≤ �� �(�, �)� (�) ≤ 4���(� \ �). −1 �∈� �∈� (�) Corollary 5.9. Let � , � be solutions to an instance of Steiner tree with edge costs � such that for parameters � ≥ 1, � ≥ 1, � (� \ �) > 4���(� \ �). Then after setting the cost of all edges in� ∩ � to 0, ︁ ︁ � − 1 �(�, �)� (�) > �(�, �)� (�). �∈A,�∈� (�):� (�)>�� (�) �∈A,�∈� (�) The corollary efectively tells us that � (if � \ �) is suiciently larger than � (� \ �), then there are many local swaps between � in� and � in� that decrease � (�) by a large fraction �of(�). The next lemma then shows that one of these swaps also does not increase � (�) by a large factor (even if instead of swapping �, win e swap in an approximation�),ofand reduces � (�) by a non-negligible amount. Lemma 5.10. Let � and � be solutions to an instance of Steiner tree with two sets of edge costs, � and � , such that for parameter Γ > 1, � (� \ �) > 4Γ · � (� \ �). Fix any0 < � < Γ − 1. Then there exists a swap ′ ′ (1+�)� (�)−� (�) 4(1+�)Γ � (�\� ) √ √ between � ∈ A and a path � between two vertices in� such that ≤ · and � (�)−(1+�)� (�) � (� \�) ( Γ−1)( Γ−1−�) � (�) − (1+ �)� (�) ≥ � (� \ �). ACM Trans. Algor. 22 • Arun Ganesh, Bruce M. Maggs, and Debmalya Panigrahi Proof. We use an averaging argument to prove the lemma. 
Consider the quantity ′ ′ 1 �(�, �)[(1+ �)� (�) − � (�)] �∈A,�∈� (�):� (�)> Γ� (�),� (�)−(1+�)� (�)≥ � (� \�) � = , �(�, �)[� (�) − (1+ �)� (�)] �∈A,�∈� (�):� (�)> Γ� (�),� (�)−(1+�)� (�)≥ � (� \�) which is the ratio of the weighted average of incr�ease to the inweighted average of decrease in � over all swaps where � (�) > Γ� (�) and � (�) − (1+ �)� (�) ≥ � (� \ �). For any edge � in� ∩ �, it is also a subpath � ∈ A ∩ F for which the only � ∈ A such that � ∪ � \ � is feasible�is = �. So for all such � we can assume that � is deined such that �(�, �) = 1, �(�, �) = 0 for� ≠ �, and �(�, �) = 0 for� ≠ �. Clearly� (�) > Γ� (�) does not hold, so no swap with a positiv � value e in either sum involves edges in � ∩ �. So we can now set the cost with respect to both�, � of edges in � ∩ � to 0, and doing so does not afect the quantity�. Then, the numerator can be upper bounded by 4(1+ �)� (� \ �). For the denominator, we irst observe that �(�, �)[� (�) − (1+ �)� (�)] ≥ �∈A,�∈� (�):� (�)> Γ� (�),� (�)−(1+�)� (�)≥ � (� \�) �(�, �)[� (�) − (1+ �)� (�)]− �∈A,�∈� (�):� (�)> Γ� (�) �(�, �)[� (�) − (1+ �)� (�)]. (11) �∈A,�∈� (�):� (�)−(1+�)� (�)< � (� \�) The second term on the right-hand side of (11) is upper bounded by: �(�, �)[� (�) − (1+ �)� (�)] ≤ �∈A,�∈� (�):� (�)−(1+�)� (�)< � (� \�) 1 1 �(�, �)� (� \ �) ≤ � (� \ �). � � �∈A,�∈� (�):� (�)−(1+�)� (�)< � (� \�) The inequality follows because there are at � most diferent� ∈ A, and for each one we have �(�, �) = 1. �∈� We next use Corollary 5.9 (setting both parameters toΓ) to get the following lower bound on the irst term in (11): ACM Trans. Algor. Robust Algorithms for TSP and Steiner Tree • 23 �(�, �)[� (�) − (1+ �)� (�)] �∈A,�∈� (�):� (�)> Γ� (�) 1+ � ≥ �(�, �)[� (�) − � (�)] �∈A,�∈� (�):� (�)> Γ� (�) Γ − 1− � = √ �(�, �)� (�) �∈A,�∈� (�):� (�)> Γ� (�) √ √ ( Γ − 1)( Γ − 1− �) ≥ �(�, �)� (�) �∈A,�∈� (�) √ √ ( Γ − 1)( Γ − 1− �) = � (� \ �). √ √ ( Γ−1)( Γ−1−�) This lower bounds the denominator �ofby − 1/� · � (� \ �). By proper choice of �, for suiciently large � we can ignore the1/� term. Then, combining the bounds implies�that is at most ′ ′ 4(1+� )Γ � (�\� ) √ √ · . In turn, one of the swaps being summed over in � satisies the lemma statement.□ � (� \�) ( Γ−1)( Γ−1−� ) We now almost have the tools to state our algorithm and prove Lemma 5.5. However, the local search process is now concerned with two edge costs, so just considering adding the shortest path with resp � beetw ct toeen each pair of vertices and deleting a subset of vertices in the induced cycle will not suice. We instead use the following lemma: ′ ′ Lemma 5.11. Given a graph� = (� , �) with edge costs� and � , two vertices� and �, and input parameter� , ′ ′ let � be the shortest path from � to � with respect to� whose cost with respect to� is at most� . For all � > 0, there is a polynomial time algorithm that inds a path from � to � whose cost with respect to� is at most� (�) and whose ′ ′ cost with respect to� is at most(1+ �)� . Proof. If all edge lengths with respect�toare multiples of Δ, an optimal solution can be found in time poly(|� |,|�|,� /Δ) via dynamic programming: ℓLet(�, �) be the length of the shortest path from � to � with respect to � whose cost with respect to� is at most�· Δ. Usingℓ(�, �) = 0 for all � and the recurrence ℓ(�, �) ≤ ℓ(�, �− (� /Δ)) + � for edge(�, �), we can compute ℓ(�, �) for all �, �and use backtracking from �� �� ′ ′ ℓ(�,� ) to retrieve� in poly(|� |,|�|,� /Δ) time. 
To get the runtime down to polynomial, we use a standard rounding trick, rounding � each down to the nearest multiple�of � /|� |. After rounding, the runtime of the dynamic programming algorithm poly(|�is|,|�|, ) = �� /|� | 1 ′ poly(|� |,|�|, ). Any path has at most|� | edges, and so its cost decreases by at most�� in this rounding process, ′ ′ i.e., all paths considered by the algorithm have cost with resp �ectoftoat most(1+ �)� . Lastly, since �’s cost with respect to� only decreases,� (�) still upper bounds the cost of the shortest path considered by the algorithm with respect�to. □ The idea is to run a local search with respe�ctstarting to with a good approximation with resp�e.ctOur to algorithm alternates between a “forwardž and “backwardž phase. In the forward phase, we use Lemma 5.11 to decide which paths can be added to the solution in local search moves. The local search takes any swap that causes both �(alg) and � (alg) to decrease if any exists. Otherwise, it picks the swap betw � ∈ealg en and � that ′ ′ � (�)−� (�) ′ ′ among all swaps where�(�) < �(�) and � (�) ≤ � (sol) minimizes the ratio (we assume we know the �(�)−�(�) ′ ′ value of� (sol), as can guess many values, and our algorithm will work for the right value � (sol for)). ACM Trans. Algor. 24 • Arun Ganesh, Bruce M. Maggs, and Debmalya Panigrahi ′ ′ Algorithm 4 DoubleApprox(� = (� , �), �, � , Γ, Γ , �) ′ ′ � ′ Input: A graph � = (� , �) with terminal �setand cost functions �, � for which all � are a multiple of� , ′ ′ ′ ′ ′ � such that � ∈ [�(sol),(1+ �) · �(sol)], � such that � ∈ [� (sol),(1+ �) · � (sol)], constants Γ, Γ , �. 1: �← 0 (0) ′ ′ 2: alg ← �-approximation of optimal Steiner tree with resp � efor ct to � < 4Γ (�) (�−2) 3: while �= 0 or �(alg ) < �(alg ) do max � � � log � (�) 1+� min � � � 4: for � ∈ {min � ,(1+�) min � , . . .(1+�) min� } do {Iterate over guesses for�(alg \ sol)} � � � � � � (�+1) (�) 5: alg ← alg {Forward phase} (�+1) (�+1) ′ ′ ′ (�) 6: while � (alg ) ≤ (4Γ + �)� and �(alg ) > �(alg ) − �/2 do � � (�+1) (�+1) ′ ′ 1 7: (alg , ����) ← GreedySwap(alg , �, �, � , �) � � 10� 8: if ����= 1 then 9: break while loop starting on line 6 10: end if 11: end while (�+2) (�+1) 12: alg ← alg {Backward phase} � � (�+2) ′ ′ ′ 13: while � (alg ) ≥ 4Γ � do (�+2) (�+2) ′ � ′ 14: (alg ,∼) ← GreedySwap(alg , � , �, �, � ) � � 15: end while 16: end for (�+2) (�+2) 17: alg ← argmin (�+2) �(alg ) alg 18: �← �+ 2 19: end while (�) 20: return all values of alg stored for any value of �, � ′ ′ Fig. 6. Algorithm DoubleApprox, which finds alg such that �(alg\ sol) ≤ � (1) · �(sol\ alg) and � (alg) ≤ � (1) · � (sol) If the algorithm only made swaps of this form, how�e(valg er, ) might become a very poor approximation ′ ′ ′ ′ ′ of� (sol). To control for this, when � (alg) exceeds (4Γ + �) · � (sol) for some constantΓ > 1, we begin a “backward phasež: We take the opposite approach, greedily choosing either swaps that improve�band oth� or �(�)−�(�) ′ ′ ′ that improve� and minimize the ratio , until � (alg) has been reduced by at least� · � (sol). At this ′ ′ � (�)−� (�) point, we begin a new forward phase. The intuition for the analysis is as follows: If, throughout a forwar �(alg d phase \ sol,) ≥ 4Γ · �(sol\ alg), Lemma 5.10 tells us that there is swap where the increase� in (alg) will be very small relative to the decrease in �(alg). (Note that our goal is to reduce the cost of �(alg\ sol) to something belowΓ4·�(sol\ alg).) 
Throughout ′ ′ ′ ′ ′ ′ the subsequent backward phase, we have � (alg) > 4Γ ·� (sol), which implies � (alg\sol) > 4Γ ·� (sol\alg). So Lemma 5.10 also implies that the total increase �(alg in) will be very small relative to the decrease � (alg in). Since the absolute change in � (alg) is similar between the two phases, one forward and one backward phase should decrease�(alg) overall. The formal description of the backward and forward phase is given as algorithm DoubleApprox in Figure 6. For the lemmas/corollaries in the following section, we implicitly assume that we kno � and w values � of satisfying the conditions DoubleAppro of x. When we conclude by proving Lemma 5.5, we will simply call ACM Trans. Algor. Robust Algorithms for TSP and Steiner Tree • 25 ′ ′ Algorithm 5 GreedySwap(alg, �, � , � , �) ′ ′ ′ ′ Input: Solution alg, cost functions �, � on the edges, � ∈ [� (sol),(1+�)� (sol)], minimum improvement per swap � 1: �����← ∅ ′ 2 ⌈log � ⌉ 1+� 2: for � ∈ {1, 1+ �,(1+ �) , . . . ,(1+ �) } do 3: for �, �∈ alg do ′ ′ ˆ ˆ 4: Find a(1+�)-approximation � of the shortest path� from� to �with respect to� such that � (�) ≤ � , 5: �∩ alg = {�, �} (via Lemma 5.11) 6: for all maximal � ⊆ alg such that alg∪ �\ � is feasible � (�,) − � (�) ≥ � do 7: �����← �����∪{(�, �)} 8: end for 9: end for 10: end for 11: if ����� = ∅ then 12: return (alg, 1) 13: end if ′ ′ � (�)−� (�) ∗ ∗ 14: (� , � ) ← argmin (�,�)∈����� � (�)−� (�) ∗ ∗ 15: return (alg∪ � \ � , 0) Fig. 7. Algorithm GreedySwap, which finds a swap with the properties described in Lemma 5.10 ′ ′ DoubleApprox for every reasonable value of �, � that is a power of+1 �, and one of these runs will hav �,e� satisfying the conditions. Furthermore, there are multiple error parameters in our algorithm and its analysis. For simplicity of presentation, we use the same�value for all error parameters in the algorithm and its analysis. 5.4 Algorithm Analysis and Proof of Lemma 5.5 In this section we analyze DoubleApprox and give the proof of Lemma 5.5. We skip the proof of some technical lemmas whose main ideas have been covered already. We irst make some observations. The irst lets us relate the decrease in cost of a solution alg to the decrease in the cost of alg\ sol. Lemma 5.12. Let alg, alg , sol be any Steiner tree solutions to a given instance. Then ′ ′ ′ �(alg) − �(alg ) = [�(alg\ sol) − �(sol\ alg)] − [�(alg \ sol) − �(sol\ alg )]. ′ ′ Proof. By symmetry, the contribution of edgesalg in∩ alg and edges in neither alg nor alg to both the left and right hand side of the equality is zero, so it suices to show that all alg edges ⊕ alg incontribute equally to the left and right hand side. ′ ′ Consider any� ∈ alg\ alg . Its contribution �to(alg) − �(alg ) is�(�). If� ∈ alg\ sol, then � contributes ′ ′ �(�) to �(alg\ sol)− �(sol\ alg) and 0 to−[�(alg \ sol)− �(sol\ alg )]. If� ∈ alg∩ sol, then � contributes ′ ′ 0 to �(alg\ sol) − �(sol\ alg) and �(�) to −[�(alg \ sol) − �(sol\ alg )]. So the total contribution�of to ′ ′ [�(alg\ sol) − �(sol\ alg)] − [�(alg \ sol) − �(sol\ alg )] is�(�). ′ ′ Similarly, consider � ∈ alg \ alg. Its contribution �to(alg) − �(alg ) is−�(�). If� ∈ sol \ alg, then � ′ ′ contributes−�(�) to �(alg \ sol) − �(sol \ alg) and 0 to [�(alg \ sol) − �(sol \ alg )]. If� ∉ sol, then � ACM Trans. Algor. 26 • Arun Ganesh, Bruce M. Maggs, and Debmalya Panigrahi ′ ′ contributes0 to �(alg\sol)−�(sol\alg) and−�(�) to−[�(alg \sol)−�(sol\alg )]. So the total contribution ′ ′ of� to [�(alg\ sol) − �(sol\ alg)] − [�(alg \ sol) − �(sol\ alg )] is−�(�). 
□ Lemma 5.12 is useful because Lemma 5.10 relates the ratio of change �, �in to �(alg\ sol), but it is diicult to track how �(alg\ sol) changes as we make swaps that improve�(alg). For example,�(alg\ sol) does not necessarily decrease with swaps that cause �(alg) to decrease (e.g. consider a swap that adds a light edge not in sol and removes a heavy edge insol). Whenever �(alg\ sol) ≫ �(sol\ alg) (if this doesn’t hold, we have a good approximation and are done), �(alg\ sol) and �(alg\ sol)− �(sol\ alg) are of by a multiplicative factor that is very close to 1, and thus we can relate the ratio of changes in Lemma �(7alg to \ sol) − �(sol\ alg) instead at a small loss in the constant, and by Lemma 5.12 changes in this quantity are much easier to track over the course of the algorithm, simplifying our analysis greatly. The next lemma lets us assume that any backward phase uses polynomially many calls Greed to ySwap. ′ ′ ′ ′ ′ Lemma 5.13. Let � be any value such that � ∈ [� (sol),(1+ �)� (sol)], and suppose we round all � up to the � ′ ′ ′ nearest multiple of � for some 0 < � < 1. Then any �-approximation ofsol with respect to� using the rounded� values is an�(1+ 2�)-approximation ofsol with respect to� using the original edge costs. Proof. This follows because the rounding can only increase the cost of any solution, and the cost increases by ′ ′ ′ at most � � ≤ �(1+ �)� (sol) ≤ 2��(sol). □ Via this lemma, we will assume � ar all e already rounded. The following two lemmas formalize the intuition given in Section 5.2; in particular, by using bounds on the “rate of exchange,ž they show that the decr �ease in in the forward phase can be lower bounded, and the increase �in in the backward phase can be upper bounded. Their proofs are highly technical and largely follow the intuition given in Section 5.2, so we defer them to the following section. Lemma 5.14 (Forward Phase Analysis). For any even �in algorithmDoubleApprox, let � be the power of(1+�) (�+1) (�) (�) timesmin � such that � ∈ [�(alg \sol),(1+�)�(alg \sol)]. Suppose all values of alg and the inal value � � � (�+1) (�+1) (�) (�) (�) of alg inDoubleApprox satisfy�(alg \sol) > 4Γ·�(sol\alg ) and �(alg \sol) > 4Γ·�(sol\alg ). � � (�+1) (�) Then for 0 < � < 2/3− 5/12Γ, the inal values of alg , alg satisfy ( ) √ √ 4Γ − 1 (4Γ − 1)( Γ − 1)( Γ − 1− �)� (�+1) (�+1) (�) �(alg ) − �(alg ) ≥ min , · �(alg \ sol). � � 8Γ 16(1+ �)Γ Lemma 5.15 (Backward Phase Analysis). Fix any even�+ 2 in algorithmDoubleApprox and any value of �. (�+1) (�+2) ′ ′ � (alg )−� (alg ) (�+2) (�+2) (�+2) � � Suppose all values of alg satisfy�(alg \ sol) > 4Γ · �(sol\ alg ). Let � = . Then � � � � (sol) for 4(1+ �)Γ � = √ √ , ′ ′ ′ ( Γ − 1)( Γ − 1− �)(4Γ − 1)(4Γ − 1) (�+1) (�+2) the inal values of alg , alg satisfy � � (�+2) (�+1) (�+1) � � �(alg ) − �(alg ) ≤ (� − 1) · �(alg \ sol). � � � By combining the two preceding lemmas, we can show that as long algasis a poor diference approximation, (�) a combination of the forward and backward phase collectively de�cr(alg ease ) ACM Trans. Algor. Robust Algorithms for TSP and Steiner Tree • 27 Corollary 5.16. Fix any positive even value of �+ 2 in algorithmDoubleApprox, and let � be the power of (1+�) (�+1) (�) (�) timesmin � such that � ∈ [�(alg \sol),(1+�)�(alg \sol)]. Suppose all values of alg and the inal value � � � (�+1) (�+1) (�) (�) (�) of alg inDoubleApprox satisfy�(alg \sol) > 4Γ·�(sol\alg ) and �(alg \sol) > 4Γ·�(sol\alg ). 
� � ′ (�+2) (�) Then for 0 < � < 2/3− 5/12Γ and � as deined in Lemma 5.15, the inal values ofalg , alg satisfy (�) (�+2) �(alg ) − �(alg ) ≥ " ( ) # √ √ 4Γ − 1 (4Γ − 1)( Γ − 1)( Γ − 1− �)� ′ ′ (�+1) � (4Γ +�+1+�) min , − (� − 1) · �(alg \ sol). 8Γ 16(1+ �)Γ (�+2) (�) (�) (�+2) Proof. It suices to lower bound�(alg )− �(alg ) for this value �of , since�(alg )− �(alg ) must be at least this large. After rescaling � appropriately, we have (�+1) (�+2) (�+1) ′ ′ ′ ′ ′ � (alg ) − � (alg ) ≤ � (alg ) ≤ (4Γ + � + 1+ �)� (sol), � � � ′ ′ because the algorithm can increase its cost with resp�ectby toat most (1+ �)� (sol) in any swap in the forward ′ ′ phase (by line 5 of GreedySwap, which bounds the increase � (�) ≤ � ≤ (1+ �)� (sol)), so it exceeds the ′ ′ ′ ′ threshold(4Γ +�)� ≤ (4Γ +�)(1+�)� (sol) on line 13 of DoubleApprox by at most this much. Then applying (�+1) (�+1) (�+2) (�) ′ Lemma 5.14 to �(alg ) − �(alg ) and Lemma 5.15 to �(alg ) − �(alg ) (using� ≤ 4Γ + � + 1+ �) � � � gives: (�+2) (�+1) (�+1) (�+2) (�) (�) �(alg ) − �(alg ) = [�(alg ) − �(alg )] + [�(alg ) − �(alg )] � � � � " ( ) # √ √ 4Γ − 1 (4Γ − 1)( Γ − 1)( Γ − 1− �)� ′ ′ (�+1) � (4Γ +�+1+�) ≥ min , − (� − 1) · �(alg \ sol). 8Γ 16(1+ �)Γ Now, we can chain Corollary 5.16 multiple times to show that after suiciently many iterations DoubleAp of - (�) (�) prox, if all intermediate values alg ofare poor diference approximations, over all these iterations �(alg ) � max � � � must decrease multiplicatively by more than , which is a contradiction as this is the ratio between an min � � � (�) upper and lower bound on the cost of every Steiner tree. In turn, some intermediate value algof must have been a good diference approximation: ′ ′ Lemma 5.17. Suppose Γ, Γ , �, and � are chosen such that for � as deined in Lemma 5.15, ( ) √ √ 4Γ − 1 (4Γ − 1)( Γ − 1)( Γ − 1− �)� ′ ′ � (4Γ +�+1+�) min , − (� − 1) > 0, 8Γ 16(1+ �)Γ and 0 < � < 2/3− 5/12Γ. Let � equal n √ √ o ′ ′ (4Γ−1)( Γ−1)( Γ−1−�)� 4Γ−1 � (4Γ +�+1+�) min , − (� − 1) 8Γ 16(1+�)Γ ′ ′ 4Γ−1 � (4Γ +�+1+�) 1+ (� − 1) 4Γ � max � � � ∗ Assume � > 0 and let � = 2(⌈log /log(1+ �)⌉ + 1). Then there exists some intermediate value alg assigned min� � � (�) ∗ ∗ ′ ∗ to alg by the algorithm for some� ≤ � and � such that �(alg \ sol) ≤ 4Γ�(sol \ alg ) and � (alg ) ≤ ′ ′ (4Γ + � + 1+ �)� (sol). ACM Trans. Algor. 28 • Arun Ganesh, Bruce M. Maggs, and Debmalya Panigrahi (�) (�) Proof. Let Φ(�) := �(alg \ sol)− �(sol\ alg ) for even�. Assume that the lemma is false. Since algorithm (�) ′ ′ ′ DoubleApprox guarantees that � (alg ) ≤ (4Γ + � + 1 + �)� (sol), if the lemma is false it must be that (�) (�) for all �and �, �(alg \ sol) > 4Γ�(sol\ alg ). By Corollary 5.16, and the assumption on Γ, Γ , �, � in the � � (�) (�−2) statement of this lemma, for�all �(alg ) < �(alg ), so the while loop on Line 3DoubleAppro of x never (�) breaks. This means that for all ev�en≤ �, alg is assigned a valueDoubleAppro in x. We will show that this (�) (�) (�) 4Γ−1 implies that for the inal value algof, Φ(�) = �(alg \ sol) − �(sol\ alg ) < min � . The inequality � � 4Γ (�) (�) (�) (�) 4Γ−1 (�) �(alg \ sol) > 4Γ�(sol\ alg ) implies �(alg \ sol)− �(sol\ alg ) > �(alg \ sol). The value of 4Γ (�) (�) (�) �(alg \ sol) must be positive (otherwise �(alg \ sol) ≤ 4Γ�(sol\ alg ) trivially), and hence it must be at leastmin � . These two inequalities conlict, which implies a contradiction. Hence the lemma must be true. � � We now analyze how the quantityΦ(�) changes under the assumption that the lemma is false. 
Of course (�) (�+2) Φ(0) ≤ � max � . Lemma 5.12 gives thatΦ(�) − Φ(�+ 2) is exactly equal �to(alg ) − �(alg ). For the value � � (�) (�) of� such that � ∈ [�(alg \ sol),(1+ �)�(alg \ sol)], by Corollary 5.16 and the assumption that the lemma is false, for ev�en we have Φ(�) − Φ(�+ 2) " ( ) # √ √ 4Γ − 1 (4Γ − 1)( Γ − 1)( Γ − 1− �)� ′ ′ (�+1) � (4Γ +�+1+�) ≥ min , − (� − 1) · �(alg \ sol) 8Γ 16(1+ �)Γ " ( ) # √ √ 4Γ − 1 (4Γ − 1)( Γ − 1)( Γ − 1− �)� ′ ′ � (4Γ +�+1+�) ≥ min , − (� − 1) 8Γ 16(1+ �)Γ (�+1) (�+1) · [�(alg \ sol) − �(sol\ alg )]. (12) � � Lemma 5.15 (using the proof from Corollary 5.16 that � ≤ 4Γ + � + 1 + �), Lemma 5.12, and the inequality (�+1) (�+1) (�+1) 4Γ−1 �(alg \ sol) − �(sol\ alg ) > �(alg \ sol) give: � � � 4Γ (�+1) (�+1) Φ(�+ 2) − [�(alg \ sol) − �(sol\ alg )] � � ′ ′ � (4Γ +�+1+�) (�+1) ≤(� − 1)])�(alg \ sol) 4Γ − 1 ′ ′ (�+1) � (4Γ +�+1+�) (�+1) < (� − 1)])[�(alg \ sol) − �(sol\ alg )] 4Γ (�+1) (�+1) =⇒ [�(alg \ sol) − �(sol\ alg )] > Φ(�+ 2). � � ′ ′ 4Γ−1 � (4Γ +�+1+�) 1+ (� − 1)) 4Γ Plugging this into (12) gives: √ √ −1 ′ ′ (4Γ−1)( Γ−1)( Γ−1−�)� 4Γ−1 � (4Γ +�+1+�) min{ , }− (� − 1) 8Γ © 16(1+�)Γ ª Φ(�+ 2) < ­1+ ® Φ(�) 4Γ−1 � (4Γ+�+1+�) 1+ (� − 1) 4Γ « ¬ −1 = (1+ �) Φ(�). Applying this inductively gives: −�/2 −�/2 Φ(�) ≤ (1+ �) Φ(0) ≤ (1+ �) � max � . � max � � � −1 Plugging in �= � = 2(⌈log /log(1+ �)⌉ + 1) givesΦ(�) ≤ (1+ �) min� < min� as desired. □ � � � � min� � � ACM Trans. Algor. Robust Algorithms for TSP and Steiner Tree • 29 To prove Lemma 5.5, we now just need to certify that it suices to guess multiple values �, �of , and that the algorithm is eicient. ′ ′ ′ ′ Proof of Lemma 5.5. If we have� ∈ [�(sol),(1+ �) · �(sol)] and � ∈ [� (sol),(1+ �) · � (sol)], and the � � ′ values are multiples of � , then the conditions of DoubleApprox are met. As long as (10) holds, that is: ( ) √ √ 4Γ − 1 (4Γ − 1)( Γ − 1)( Γ − 1− �)� ′ ′ � (4Γ +�+1+�) min , − (� − 1) > 0, (10) 8Γ 16(1+ �)Γ then we have � > 0 in Lemma 5.17, thus giving the approximation guarantee in Lemma 5.5. For any p �ositiv , �,Γ , e ′ ′ ′ � (4Γ +�+1+�) there exists a suiciently large value Γ for of (10) to hold, since asΓ → ∞, we have � → 0,(� −1) → 0, n √ √ o (4Γ−1)( Γ−1)( Γ−1−�)� 4Γ−1 ′ and min , → min{1/2, �/(4+ 4�)}, so for any ixed choice �of , �,Γ , a suiciently 8Γ 16(1+�)Γ large value ofΓ causes � > 0 to hold as desired. � max � � � ⌈log ⌉ 1+� min � � � Some value in{min � ,(1+�) min � , . . .(1+�) min� } satisies the conditions�for , and there � � � � � � � max� � ⌈log ⌉ ′ ′ 1+� ′ min � are polynomially many values in this set. The same holds � in for{min� , . . .(1+ �) min� }. � � � � So we can run DoubleApprox for all pairs�of , � (paying a polynomial increase in runtime), and output the union of all outputs, giving the guarantee of Lemma 5.5 by Lemma 5.17. For�each we choose, we can round the � ′ edge costs to the nearest multiple of� before running DoubleApprox, and by Lemma 5.13 we only pay an additiv �e(�) in the approximation factor with resp�ect. Finally to , we note that by setting � appropriately in the statement of Lemma 5.17, we can achieve the approximation guarantee stated in Lemma 5.5 for a diferent value of�. Then, we just need to show DoubleApprox runs in polynomial time. Lemma 5.17 shows that the while loop of Line 3 only needs to be run a polynomial numb �) er of(times. The while loop for the forward phase runs at most � (� ) times since each callGreed to ySwap decreases the cost with respect to� by at least �, and 10� once the total decrease exceeds�/2 the while loop breaks. 
The while loop for the backward phase runs at most ′ ′ (� + 1+ �) times, since the initial cost with resp � is ect at tomost(4Γ + � + 1+ �)� , the while loop breaks ′ ′ ′ when it is less than Γ 4� , and each call toGreedySwap improves the cost by at least � . Lastly,GreedySwap can be run in polynomial time as the maximal � which need to be enumerated can be computed in polynomial time as described in Section 4. □ 5.5 Proofs of Lemmas 5.14 and 5.15 (�+1) (�+1) (�+1) Proof of Lemma 5.14. Let alg denote the value ofalg after�calls toGreedySwap on alg , and � � �,� (�+1) (�+1) (�) let� be the total number of calls of GreedySwap on alg . Then alg is the inal valuealg of , and the �,0 (�+1) (�+1) (�+1) inal value of alg isalg . Any timeGreedySwap is invoked on alg , by line 6 of DoubleApprox and � � �,� (�) the assumption� ≤ (1+ �)�(alg \ sol) in the lemma statement, we have: 1+ � (�+1) (�) (�) (�) �(alg ) > �(alg ) − �/2 ≥ �(alg ) − �(alg \ sol). (�) (�) Then, by Lemma 5.12 and the assumption�(alg \ sol) > 4Γ · �(sol\ alg ) in the lemma statement, we have: ACM Trans. Algor. 30 • Arun Ganesh, Bruce M. Maggs, and Debmalya Panigrahi (�+1) (�+1) (�+1) �(alg \ sol) ≥ �(alg \ sol) − �(sol\ alg ) � � � (�+1) (�) (�) (�) = �(alg \ sol) − �(sol\ alg ) + �(alg ) − �(alg ) 1+ � (�) (�) (�) ≥ �(alg \ sol) − �(sol\ alg ) − �(alg \ sol) 1− � 1 (�) ≥ − �(alg \ sol), 2 4Γ (�+1) 2 1 For � < 2/3− 5/12Γ, �(alg \ sol)/� ≥ �. So by Lemma 5.10 GreedySwap never outputs a tuple where 10� �����= 1, and thus we can ignore lines 8-10DoubleAppro of x under the conditions in the lemma statement. (�+1) (�+1) (�) Suppose alg satisies �(alg ) ≤ �(alg ) − �/2, a condition that causes the while loop at line 6 of �,� �,� DoubleApprox to exit and the forward phase to end. Then (�+1) (�) (�) �(alg ) − �(alg ) ≥ �/2 ≥ �(alg \ sol) �,� (�) (�) ≥ [�(alg \ sol) − �(sol\ alg )] (�+1) (�+1) = [�(alg \ sol) − �(sol\ alg )] �,0 �,0 (�+1) (�+1) ≥ [�(alg \ sol) − �(sol\ alg )] �,� �,� 4Γ − 1 (�+1) ≥ �(alg \ sol). �,� 8Γ (�+1) (�+1) The second-to-last inequality is using Lemma 5.12, which �implies (alg \ sol) − �(sol \ alg ) is �,� �,� (�+1) (�+1) decreasing with swaps, and the last inequality holds by the assumption �(alg \ sol) > 4Γ · �(sol\ alg ) � � (�+1) (�) in the lemma statement. Thus �if(alg ) ≤ �(alg ) − �/2, the lemma holds. �,� (�+1) (�) Now assume instead that�(alg ) > �(alg )− �/2 when the forward phase ends. We want a lower bound �,� on �−1 (�+1) (�+1) (�+1) (�+1) �(alg ) − �(alg ) = [�(alg ) − �(alg )]. �,0 �,� �,� �,�+1 �=0 (�+1) (�+1) We bound each �(alg ) − �(alg ) term using Lemma 5.10 and Lemma 5.11. By Lemma 5.10 and the �,� �,�+1 (�+1) (�+1) assumption in the lemma statement that �(alg \ sol) > 4Γ · �(sol\ alg ), we know there exists a swap � � (�+1) between � ∈ alg and � ∈ sol such that �,� (�+1) ′ ′ � (sol\ alg ) (1+ �)� (�) − � (�) 4(1+ �)Γ �,� ≤ · . √ √ (�+1) �(�) − (1+ �)�(�) ( Γ − 1)( Γ − 1− �) �(alg \ sol) �,� ′ ′ ′ By Lemma 5.11, we know that when � is set to a value in [� (�),(1+ �) · � (�)] in line 2Greed of ySwap, the ′ ′ ′ ′ ′ algorithm inds a path � between the endpoints of � such that �(� ) ≤ (1+ �)�(�) and � (� ) ≤ (1+ �)� (�). ′ ∗ ∗ Thus (�, � ) ∈ ����� and the swap (� , � ) chosen by the ( �+ 1)th call toGreedySwap satisies: ACM Trans. Algor. Robust Algorithms for TSP and Steiner Tree • 31 ′ ∗ ′ ∗ ′ ′ ′ ′ ′ � (� ) − � (� ) � (� ) − � (�) (1+ �)� (�) − � (�) ≤ ≤ ≤ ∗ ∗ �(� ) − �(� ) �(�) − �(�) �(�) − (1+ �)�(�) (�+1) � (sol\ alg ) 4(1+ �)Γ �,� √ √ · . 
(�+1) ( Γ − 1)( Γ − 1− �) �(alg \ sol) �,� (�+1) ′ ′ Rearranging terms and observing that� (sol) ≥ � (sol\ alg ) gives: �,� (�+1) (�+1) ∗ ∗ �(alg ) − �(alg ) = �(� ) − �(� ) �,� �,�+1 √ √ ′ ∗ ′ ∗ ( Γ − 1)( Γ − 1− �) � (� ) − � (� ) (�+1) ≥ · �(alg \ sol) �,� 4(1+ �)Γ � (sol) √ √ (�+1) (�+1) ′ ′ � (alg ) − � (alg ) ( Γ − 1)( Γ − 1− �) �,�+1 �,� (�+1) = · �(alg \ sol) . �,� 4(1+ �)Γ � (sol) This in turn gives: ACM Trans. Algor. 32 • Arun Ganesh, Bruce M. Maggs, and Debmalya Panigrahi �−1 (�+1) (�+1) (�+1) (�+1) �(alg ) − �(alg ) = [�(alg ) − �(alg )] �,0 �,� �,� �,�+1 �=0 √ √ �−1 ( Γ − 1)( Γ − 1− �) (�+1) ≥ · �(alg \ sol) �,� 4(1+ �)Γ �=0 (�+1) (�+1) ′ ′ � (alg ) − � (alg ) �,� �,�+1 � (sol) √ √ �−1 ( Γ − 1)( Γ − 1− �) (�+1) (�+1) ≥ [�(alg \ sol) − �(sol\ alg )] �,� �,� 4(1+ �)Γ �=0 (�+1) (�+1) ′ ′ � (alg ) − � (alg ) �,�+1 �,� � (sol) √ √ �−1 ( Γ − 1)( Γ − 1− �) (�+1) (�+1) ≥ [�(alg \ sol) − �(sol\ alg )] �,� �,� 4(1+ �)Γ �=0 (�+1) (�+1) ′ ′ � (alg ) − � (alg ) �,�+1 �,� � (sol) √ √ (4Γ − 1)( Γ − 1)( Γ − 1− �) (�+1) ≥ �(alg \ sol) 2 �,� 16(1+ �)Γ (�+1) (�+1) ′ ′ �−1 � (alg ) − � (alg ) �,�+1 �,� � (sol) �=0 √ √ (4Γ − 1)( Γ − 1)( Γ − 1− �) (�+1) = �(alg \ sol) 2 �,� 16(1+ �)Γ (�+1) (�+1) ′ ′ � (alg ) − � (alg ) �,� �,0 � (sol) √ √ (4Γ − 1)( Γ − 1)( Γ − 1− �)� (�+1) ≥ �(alg \ sol). 2 �,� 16(1+ �)Γ (�+1) (�+1) The third-to-last inequality is using Lemma 5.12, which �(implies alg \ sol)− �(sol\ alg ) is decreasing �,� �,� (�+1) (�+1) with swaps. The second-to-last inequality is using the assumption �(alg \ sol) > 4Γ· �(sol\ alg ) in the � � statement the lemma. The last inequality uses the fact that the while loop on line DoubleAppro 6 of x terminates (�+1) (�+1) ′ ′ ′ (�) because � (alg ) > (4Γ + �)� (by the assumption that�(alg ) > �(alg ) − �/2), and lines 2 and 13 of �,� �,� (�+1) ′ ′ ′ DoubleApprox give that� (alg ) ≤ 4Γ � . □ �,0 (�+2) ′ ′ ′ ′ ′ Proof of Lemma 5.15. Because � (alg ) > 4Γ � in every backwards phase and� ≥ � (sol), by Lemma 5.10 (�+2) whenever GreedySwap is called on alg in line 14DoubleAppro of x, at least one swap is possible. Since all ACM Trans. Algor. Robust Algorithms for TSP and Steiner Tree • 33 � ′ � ′ edge costs are multiples of� , and the last argument toGreedySwap is � (which lower bounds the decrease � � (�+2) in� (alg ) due to any improving swap), GreedySwap always makes a swap. (�+2) (�+2) (�+2) Let alg denote the value ofalg after�calls toGreedySwap on alg , and let� be the total number �,� (�+2) (�+2) (�+1) (�+2) of calls of GreedySwap on alg . Then alg is the inal valuealg of and the inal value of alg is �,0 (�+2) alg . We want to show that �,� �−1 (�+2) (�+2) (�+2) (�+2) (�+2) � � �(alg ) − �(alg ) = [�(alg ) − �(alg )] ≤ (� − 1)�(alg \ sol). �,� �,� �,0 �,�+1 �,0 �=0 (�+2) (�+2) We bound each �(alg ) − �(alg ) term using Lemma 5.10 and Lemma 5.11. Since in a backward phase �+1 � (�+2) (�+2) ′ ′ ′ we have � (alg ) > 4Γ � (sol), by Lemma 5.10 we know there exists a swap between� ∈ alg and � ∈ sol �,� such that (�+2) �(sol\ alg ) (1+ �)�(�) − �(�) 4(1+ �)Γ �,� ≤ √ √ · . ′ ′ (�+2) ′ ′ ′ � (�) − (1+ �)� (�) ( Γ − 1)( Γ − 1− �) � (alg \ sol) �,� By Lemma 5.11, we know that when � is set to the value in [�(�),(1+ �)· �(�)] in line 2Greed of ySwap, the ′ ′ ′ ′ ′ algorithm inds a path � between the endpoints of � such that � (� ) ≤ (1+ �)� (�) and �(� ) ≤ (1+ �)�(�). 
′ ∗ ∗ Thus (�, � ) ∈ ����� and we get that the swap (� , � ) chosen by the ( �+ 1)th call toGreedySwap satisies: ∗ ∗ ′ �(� ) − �(� ) �(� ) − �(�) (1+ �)�(�) − �(�) ≤ ≤ ′ ∗ ′ ∗ ′ ′ ′ ′ � (� ) − � (� ) � (�) − � (�) � (�) − (1+ �)� (�) (�+2) �(sol\ alg ) 4(1+ �)Γ �,� ≤ √ √ · (�+2) ′ ′ ( Γ − 1)( Γ − 1− �) � (alg \ sol) �,� (�+2) �(alg \ sol) 4(1+ �)Γ �,� ≤ · . √ √ ′ ′ ′ � (sol) ( Γ − 1)( Γ − 1− �)(4Γ − 1)(4Γ) (�+2) (�+2) The last inequality is derived using the assumption �(alg \ sol) > 4Γ · �(sol\ alg ) in the statement � � (�+2) (�+2) ′ ′ ′ of the lemma, as well as the fact that for �all < �, � (alg ) ≥ 4Γ� (sol) =⇒ � (alg \ sol) ≥ �,� �,� ACM Trans. Algor. 34 • Arun Ganesh, Bruce M. Maggs, and Debmalya Panigrahi (�+2) ′ ′ ′ ′ � (alg ) − � (sol) ≥ (4Γ − 1)� (sol). This in turn gives: �,� �−1 (�+2) (�+2) (�+2) (�+2) �(alg ) − �(alg ) = [�(alg ) − �(alg )] �,� �,0 �,�+1 �,� �=0 (�+2) (�+2) �−1 �(alg ) − �(alg ) �,�+1 �,� (�+2) (�+2) ′ ′ = · [� (alg ) − � (alg )] �,� �,�+1 (�+2) (�+2) ′ ′ � (alg ) − � (alg ) �=0 �,� �,�+1 �−1 4(1+ �)Γ (�+2) ≤ √ √ [�(alg \ sol)] �,� ′ ′ ′ ( Γ − 1)( Γ − 1− �)(4Γ − 1)(4Γ) �=0 (�+2) (�+2) ′ ′ � (alg ) − � (alg ) �,� �,�+1 � (sol) 4(1+ �)Γ √ √ ′ ′ ′ ( Γ − 1)( Γ − 1− �)(4Γ − 1)(4Γ − 1) �−1 (�+2) (�+2) · [�(alg \ sol) − �(sol\ alg )] �,� �,� �=0 (�+2) (�+2) ′ ′ � (alg ) − � (alg ) �,� �,�+1 � (sol) �−1 (�+2) (�+2) = � [�(alg \ sol) − �(sol\ alg )] (13) �,� �,� �=0 (�+2) (�+2) ′ ′ � (alg ) − � (alg ) �,� �,�+1 · . (14) � (sol) (�+2) (�+2) The last inequality is proved using the assumption �(alg \ sol) > 4Γ · �(sol\ alg ) in the statement of � � the lemma, which implies 4Γ 1 (�+2) (�+2) (�+2) �(alg \ sol) = �(alg \ sol) − �(alg \ sol) �,� �,� �,� 4Γ − 1 4Γ − 1 4Γ 4Γ (�+2) (�+2) < �(alg \ sol) − �(sol\ alg ). �,� �,� 4Γ − 1 4Γ − 1 It now suices to show (�+2) (�+2) ′ ′ �−1 � (alg ) − � (alg ) �,� �,�+1 (�+2) (�+2) [�(alg \ sol) − �(sol\ alg )] · ≤ �,� �,� ′ � (sol) �=0 � � � − 1 (�+2) �(alg \ sol). �,0 To do so, we view the series of swaps as occurring over a continuous timeline, wher �= 0e, 1for , . . . �− 1 the (�+2) (�+2) ′ ′ � (alg )−� (alg ) Í Í �,� �,�+1 ′ ′ ( �+ 1)th swap takes time�( �) = , i.e., occurs from time′ �( �) to time ′ �( �). The � <� � ≤ � � (sol) ′ ′ total time taken to perform all swaps in the sum is the total decr�ease acrin oss all swaps, divided�by(sol), ACM Trans. Algor. Robust Algorithms for TSP and Steiner Tree • 35 (�+2) (�+2) i.e., exactly� . Using this deinition of time Φ(�), let denote �(alg \ sol)− �(sol\ alg ) for the value of � �,� �,� Í Í ′ ′ satisfying Φ(�) ∈ [ ′ �( �), ′ �( �)). Using this deinition, we get: � <� � ≤ � (�+2) (�+2) �−1 ′ ′ →� ︁ � (alg ) − � (alg ) �,� �,�+1 (�+2) (�+2) [�(alg \ sol) − �(sol\ alg )] · = Φ(�) ��. �,� �,� � (sol) �=0 (�+2) � � We conclude by claiming Φ(�) ≤ � �(alg \ sol). Given this claim, we get: �,0 ∫ ∫ →� →� � � ′ � − 1 (�+2) � � (�+2) Φ(�) ��≤ �(alg \ sol) � �� = �(alg \ sol). �,0 �,0 0 0 Which completes the proof of the lemma. We now focus on proving the claim.Φ(Since �) is ixed in the Í Í Í ′ ′ ′ interval[ ′ �( �), ′ �( �)), it suices to prove the claim only�for which are equal to ′ �( �) for � <� � ≤ � � <� some �, so we proceed by induction on �. The claim clearly holds �for = 0 since ′ �( �) = 0 and Φ(0) = � <0 (�+2) (�+2) (�+2) �(alg \ sol) − �(sol\ alg ) ≤ �(alg \ sol). �,0 �,0 �,0 Í ′ ′ (�+2) ′ ′ ′ � � ′′ ′ Assume that for� = ′ �( �), we have Φ(�) ≤ � �(alg \ sol). For � = � + �( �), by induction we � <� �,0 ′′ � �( �) ′ can prove the claim by showing Φ(� ) ≤ � Φ(�). 
To show this, we consider the quantity (�+2) (�+2) (�+2) (�+2) ′′ ′ Φ(� ) − Φ(�) = [�(alg \ sol) − �(sol\ alg )] − [�(alg \ sol) − �(sol\ alg )] �,�+1 �,�+1 �,� �,� (�+2) (�+2) (�+2) (�+2) = [�(alg \ sol) − �(alg \ sol)] + [�(sol\ alg ) − �(sol\ alg )]. �,�+1 �,� �,� �,�+1 By Lemma 5.12 and reusing the bound in (14), we have: (�+2) (�+2) ′′ ′ Φ(� ) − Φ(�) = �(alg ) − �(alg ) �,�+1 �,� (�+2) (�+2) [�(alg \ sol) − �(sol\ alg )] �,� �,� (�+2) (�+2) ′ ′ ′ ≤ � [� (alg ) − � (alg )] �,� �,�+1 � (sol) ′ (�+2) (�+2) ′ ′ = � · [�(alg \ sol) − �(sol\ alg )] · �( �) = � · Φ(�) · �( �). �,� �,� Rearranging terms we have: ′′ ′ ′ � �( �) ′ Φ(� ) ≤ (1+ � · �( �)) Φ(�) ≤ � Φ(�), where we use the inequality 1+ � ≤ � . This completes the proof of the claim. □ 6 HARDNESS RESULTS FOR ROBUST PROBLEMS We give the following general hardness result for a family of problems that includes many graph optimization problems: Theorem 6.1. Let P be any robust covering problem whose input includes a weighted graph � where the lengths � of the edges are given as ranges[ℓ ,� ] and for which the non-robust version of the problem,P , has the following � � � properties: • A solution to an instance ofP can be written as a (multi-)set� of edges in� , and has cost � . �∈� � • Given an input including � to P , there is a polynomial-time approximation-preserving reduction from solving ′ ′ ′ ′ P on this input to solvingP on some input including � , where � is the graph formed by taking� , adding ∗ ∗ a new vertex � , and adding a single edge from� to some � ∈ � of weight 0. ACM Trans. Algor. 36 • Arun Ganesh, Bruce M. Maggs, and Debmalya Panigrahi • For any input including � to P , given any spanning tre� e of � , there exists a feasible solution only including edges from � . Then, if there exists a polynomial time(�, �)-robust algorithm forP, there exists a polynomial-time �-approximation algorithm forP. Before proving Theorem 6.1, we note that robust traveling salesman and robust Steiner tree are examples of problems that Theorem 6.1 implicitly gives lower bounds for. For both problems, the irst property clearly holds. For traveling salesman, given any input � , any solution to the problem on input � as described in Theorem 6.1 ∗ ∗ can be turned into a solution of the same cost on input � by removing the new vertex� (since� was distance 0 from�, removing� does not afect the length of any tour), giving the second property. For any spanning tree of � , a walk on the spanning tree gives a valid TSP tour, giving the third property. For Steiner tree, for the input with graph � and the same terminal set, for any solution containing the edge (�, �) we can remove this edge and get a solution for the input with�graph that is feasible and of the same cost. Otherwise, the solution is already a solution for the input with � that graph is feasible and of the same cost, so the second property holds. Any spanning tree is a feasible Steiner tree, giving the third property. We now give the proof of Theorem 6.1. Proof of Theorem 6.1. Suppose there exists a polynomial time (�, �)-robust algorithm � forP. The �- approximation algorithmPfor is as follows: (1) From the input instanceI ofP where the graph is� , use the approximation-preserving reduction (that ′ ′ ′ must exist by the second property of the theorem) to construct instance I ofP where the graph is� . ′′ ′ ′ (2) Construct an instanceI ofP fromI as follows: For all edges�in , their length is ixed to their length ′ ∗ 6 inI . 
In addition, we add a “specialž edge fr�om to all vertices besides � with length range[0,∞] . ′′ ′ (3) Run � onI to get a solution sol. Treat this solution as a solutionI to (we will show it only uses edges that appear inI). Use the approximation-preserving reduction to conv sol ertinto a solution for I and output this solution. Let � denote the cost of the optimal solutionI to . Then, mr ≤ � . To see why, note that the optimal solution toI has cost � in all realizations of demands since it only uses edges of ixed cost, and thus its regret is at most � . This also implies that for d, opt all(d) is inite. Then fordall , sol(d) ≤ � · opt(d)+ �· mr, i.esol . (d) is inite in all realizations of demands, sol so does not include any special edges, as any solution with a special edge has ininite cost in some realization of demands. Now consider the realization of demands d where all special edges have length 0. The special edges and the ∗ ′ ′ edge (�, �) span � , so by the third property ofP in the theorem statement there is a solution using only 0 cost edges in this realization, opti.e(d). = 0. Then in this realization, sol(d) ≤ � · opt(d) + � · mr ≤ � · � . But since sol does not include any special edges, and all edges besides special edges have ixed cost and their cost is the ′′ ′ ′ ′ same inI as inI , sol(d) also is the cost of sol in instanceI , i.esol . (d) is a�-approximation for I . Since the reduction fromI toI is approximation-preserving, we also�get -appr a oximation for I. □ From [10, 15] we then get the following hardness results: Corollary 6.2. Finding an(�, �)-robust solution for Steiner tree where� < 96/95 is NP-hard. Corollary 6.3. Finding an(�, �)-robust solution for TSP where� < 121/120 is NP-hard. ∞ is used to simplify the proof, but can be replaced with a suiciently large inite number. For example, the total weight�of all edges in suices and has small bit complexity. ACM Trans. Algor. Robust Algorithms for TSP and Steiner Tree • 37 7 CONCLUSION In this paper, we designed constant approximation algorithms for the robust Steiner stt)trand ee (traveling salesman problemstsp ( ). More precisely, our algorithms take as input a range of possible edge lengths in a graph and obtain a single solution for the problem at hand that can be compared to the optimal solution for any realization of edge lengths in the given ranges. While our approximationtsp bounds are small for constants, that forstt are very large constants. A natural question is whether these constants can be made smaller, e.g., of the same scale as classic approximation bounds stt for . While we did not seek to optimize our constants, obtaining truly small constants for stt appears to be beyond our techniques, and is an interesting open question. More generally, robust algorithms are a key component in the area of optimization under uncertainty that is of much practical and theoretical signiicance. Indeed, as mentioned in our survey of related work, several diferent models of robust algorithms have been considered in the literature. Optimizing over input ranges is one of the most natural models in robust optimization, but has been restricted in the past to polynomial-time solvable problems because of deinitional limitations. We circumvent this by setting regret minimization as our goal, and creating (�, �)the -approximation framework, which then allows us to consider a large variety of interesting combinatorial optimization problems in this setting. 
We hope that our work will lead to more research in robust algorithms for other fundamental problems in combinatorial optimization, particularly in algorithmic graph theory. ACKNOWLEDGMENTS Arun Ganesh was supported in part by NSF Award CCF-1535989. Bruce M. Maggs was supported in part by NSF Award CCF-1535972. Debmalya Panigrahi was supported in part by NSF grants CCF-1535972, CCF-1955703, an NSF CAREER Award CCF-1750140, and the Indo-US Virtual Networked Joint Center on Algorithms under Uncertainty. REFERENCES [1] H. Aissi, C. Bazgan, and D. Vanderpooten. 2008. Complexity of the minśmax (regret) versions of min cut prDiscr oblems. ete Optimization 5, 1 (2008), 66 ś 73. [2] Hassene Aissi, Cristina Bazgan, and Daniel Vanderpooten. 2009. Minśmax and minśmax regret versions of combinatorial optimization problems: A survey.European Journal of Operational Research197, 2 (2009), 427 ś 438. https://doi.org/10.1016/j.ejor.2008.09.012 [3] Igor Averbakh. 2001. On the complexity of a class of combinatorial optimization problems withMathematical uncertainty. Programming 90, 2 (01 Apr 2001), 263ś272. [4] Igor Averbakh. 2005. The Minmax Relative Regret Median Problem on Networks. INFORMS Journal on Computing17, 4 (2005), 451ś461. [5] I. Averbakh and Oded Berman. 1997. Minimax regret p-center location on a network with demand uncertainty Location . Science5, 4 (1997), 247 ś 254. https://doi.org/10.1016/S0966-8349(98)00033-3 [6] Igor Averbakh and Oded Berman. 2000. Minmax Regret Median Location on a Network Under Uncertainty INFORMS . Journal on Computing12, 2 (2000), 104ś110. https://doi.org/10.1287/ijoc.12.2.104.11897 arXiv:https://doi.org/10.1287/ijoc.12.2.104.11897 [7] Dimitris Bertsimas and Melvyn Sim. 2003. Robust discrete optimization and netw Mathematical ork lows. Programming98, 1 (01 Sep 2003), 49ś71. https://doi.org/10.1007/s10107-003-0396-4 [8] Jaroslaw Byrka, Fabrizio Grandoni, Thomas Rothvoß, and Laura Sanità. 2010. An improved LP-based approximation for Steiner tree. In Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC 2010, Cambridge, Massachusetts, USA, 5-8 June 2010 . 583ś592. [9] André Chassein and Marc Goerigk. 2015. On the recoverable robust traveling salesman problem. Optimization Letters 10 (09 2015). https://doi.org/10.1007/s11590-015-0949-5 [10] Miroslav Chlebík and Janka Chlebíková. 2002. Approximation Hardness of the Steiner Tree Problem on Graphs. Algorithm In Theory Ð SWAT 2002, Martti Penttonen and Erik Meineche Schmidt (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 170ś179. [11] Eduardo Conde. 2012. On a constant factor approximation for minmax regret problems using a symmetry point scenario European. Journal of Operational Research219, 2 (2012), 452 ś 457. https://doi.org/10.1016/j.ejor.2012.01.005 [12] Kedar Dhamdhere, Vineet Goyal, R. Ravi, and Mohit Singh. 2005. How to Pay, Come What May: Approximation Algorithms for Demand-Robust Covering Problems. In 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2005), 23-25 October 2005, Pittsburgh, PA, USA, Proceedings . 367ś378. ACM Trans. Algor. 38 • Arun Ganesh, Bruce M. Maggs, and Debmalya Panigrahi [13] Martin Groß, Anupam Gupta, Amit Kumar, Jannik Matuschke, Daniel R. Schmidt, Melanie Schmidt, and José Verschae. 2018. A Local-Search Algorithm for Steiner Forest. 9thInInnovations in Theoretical Computer Science Conference, ITCS 2018, January 11-14, 2018, Cambridge, MA, USA. 31:1ś31:17. https://doi.org/10.4230/LIPIcs.ITCS.2018.31 [14] Masahiro Inuiguchi and Masatoshi Sakawa. 1995. 
Minimax regret solution to linear programming problems with an interval objective function.European Journal of Operational Research86, 3 (1995), 526 ś 536. https://doi.org/10.1016/0377-2217(94)00092-Q [15] Marek Karpinski, Michael Lampis, and Richard Schmied. 2013. New Inapproximability Bounds Algorithms for TSP. In and Computation , Leizhen Cai, Siu-Wing Cheng, and Tak-Wah Lam (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 568ś578. [16] Adam Kasperski and PawełZieliński. 2006. An Approximation Algorithm for Interval Data Minmax Regret Combinatorial Optimization Problems.Inf. Process. Lett. 97, 5 (March 2006), 177ś180. https://doi.org/10.1016/j.ipl.2005.11.001 [17] Adam Kasperski and Pawel Zieliński. 2007. On the existence of an FPTAS for minmax regret combinatorial optimization problems with interval data.Oper. Res. Lett. 35 (2007), 525ś532. [18] P. Kouvelis and G. Yu. 1996.Robust Discrete Optimization and Its Applications . Springer US. [19] Panos Kouvelis and Gang Yu. 1997. Robust 1-Median Location Problems: Dynamic Aspects and Uncertainty . Springer US, Boston, MA, 193ś240. https://doi.org/10.1007/978-1-4757-2620-6_6 [20] Helmut E. Mausser and Manuel Laguna. 1998. A new mixed integer formulation for the maximum regret prInternational oblem. Transactions in Operational Research 5, 5 (1998), 389 ś 403. https://doi.org/10.1016/S0969-6016(98)00023-9 [21] V. Vazirani. 2001. Approximation algorithms . Springer-Verlag, Berlin. [22] Jens Vygen. [n.d.]. New approximation algorithms for the TSP. [23] Laurence A. Wolsey. 1980. Heuristic analysis, linear programming and branch and bound. Combinatorial In Optimization , VII. J. Rayward-Smith (Ed.). Springer Berlin Heidelberg, Berlin, Heidelberg, 121ś134. https://doi.org/10.1007/BFb0120913 [24] H. Yaman, O. E. Karaşan, and M. Ç. Pinar. 2001. The robust spanning tree problem with intervalOp data. erations Research Letters29, 1 (2001), 31 ś 40. [25] P. Zieliński. 2004. The computational complexity of the relative robust shortest path problem with inter European val Journal data. of Operational Research158, 3 (2004), 570 ś 576. ACM Trans. Algor. http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png ACM Transactions on Algorithms (TALG) Association for Computing Machinery

Robust Algorithms for TSP and Steiner Tree

Loading next page...
 
/lp/association-for-computing-machinery/robust-algorithms-for-tsp-and-steiner-tree-xxO7fZum9Y

References (45)

Publisher
Association for Computing Machinery
Copyright
Copyright © 2023 Copyright held by the owner/author(s).
ISSN
1549-6325
eISSN
1549-6333
DOI
10.1145/3570957
Publisher site
See Article on Publisher Site

Abstract

ARUN GANESH, UC Berkeley, USA BRUCE M. MAGGS, Duke University and Emerald Innovations, USA DEBMALYA PANIGRAHI, Duke University, USA Robust optimization is a widely studied area in operations research, where the algorithm takes as input a range of values and outputs a single solution that performs well for the entire range. Speciically, a robust algorithm aims regr toetminimize , deined as the maximum diference between the solution’s cost and that of an optimal solution in hindsight once the input has been realized. For graph problems P in , such as shortest path and minimum spanning tree, robust polynomial-time algorithms that obtain a constant approximation on regret are known. In this paper, we study robust algorithms for minimizing regret in NP-hard graph optimization problems, and give constant approximations on regret for the classical traveling salesman and Steiner tree problems. CCS Concepts: · Theory of computation→ Graph algorithms analysis. Additional Key Words and Phrases: Steiner tree, travelling salesman 1 INTRODUCTION In many graph optimization problems, the inputs are not known precisely and the algorithm is desired to perform well over a range of inputs. For instance, consider the following situations. Suppose we are planning the delivery route of a vehicle that must deliver goods � lo to cations. Due to varying traic conditions, the exact travel times between locations are not known precisely, but a range of possible travel times is available from historical data. Can we design a tour that is nearly optimal allfor travel times in the given ranges? Consider another situation where we are designing a telecommunication network to connect a set of locations. We are given cost estimates on connecting every two locations in the network but these estimates might be of due to unexpected construction problems. Can we design the network in a way that is nearly optimal all rfor ealized construction costs? These questions have led to the ieldrof obust graph algorithms. To deine a robust graph algorithm, we start with a “standardž optimization problem P deined by a set systemS ⊆ 2 over edges � with weights � . For example, ifP is the minimum spanning tree problem, S would be the set of all sets of edges comprising spanning trees. Given these inputs, the goal of the standard versionP is of to ind the set � ∈ S that minimizes � . �∈� In the robust version ofP, given a range of weights [ℓ ,� ] for every edge�, we want a solution that is good for � � all realizations of edge weights simultaneously. To quantify how good a solution is,regr weetdeine as the its maximum diference between the algorithm’s cost and the optimal cost for any vector of edge w d.eights In other We focus on minimization problems over sets of edges in this paper, but one can easily extend the deinition to maximization problems and problems over arbitrary set systems. Authors’ addresses: Arun Ganesh, arunganesh@berkeley.edu, UC Berkeley, Soda Hall, Berkeley, California, USA, 94709; Bruce M. Maggs, bmm@cs.duke.edu, Duke University and Emerald Innovations, 308 Research Drive, Durham, North Carolina, USA, 27710; Debmalya Panigrahi, debmalya@cs.duke.edu, Duke University, 308 Research Drive, Durham, North Carolina, USA, 27710. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for proit or commercial advantage and that copies bear this notice and the full citation on the irst page. 
Moreover, robust optimization has been extensively researched for other (non-graph) problem domains in the operations research community, and has led to results in clustering [4-6, 19], linear programming [14, 20], and other areas [3, 16]. More details can be found in the book by Kouvelis and Yu [18] and the survey by Aissi et al. [2].
Other robust variants of graph optimization where one does not know the edge costs ahead of time have also been studied in the literature. In the robust combinatorial optimization model proposed by Bertsimas and Sim [7], edge costs are given as ranges as in this paper, but instead of optimizing over all realizations of costs within the ranges, the authors consider a model where at most k edge costs can be set to their maximum value and the remaining are set to their minimum value. The objective is to minimize the maximum cost over all realizations. In this setting, there is no notion of regret, and an approximation algorithm for the standard problem translates to an approximation algorithm for the robust problem with the same approximation factor.

In the data-robust model [12], the input includes a polynomial number of explicitly defined "scenarios" for edge costs, with the goal of finding a solution that is approximately optimal for all given scenarios. That is, the input consists of a graph and a polynomial number of scenarios d^(1), d^(2), ..., d^(k), and the goal is to find alg whose maximum cost across all scenarios is at most some approximation factor times min_sol max_{i∈[k]} Σ_{e∈sol} d_e^(i). In contrast, in this paper we have exponentially many scenarios, and we look at the maximum of alg(d) − opt(d) rather than of alg(d). A variation of this is the recoverable robust model [9], where after seeing the chosen scenario, the algorithm is allowed to "recover" by making a small set of changes to its original solution.

1.2 Problem Definition and Results

We first define the Steiner tree (stt) and traveling salesman (tsp) problems. In both problems, the input is an undirected graph G = (V, E) with non-negative edge costs. In Steiner tree, we are also given a subset of vertices called terminals, and the goal is to obtain a minimum-cost connected subgraph of G that spans all the terminals. In traveling salesman, the goal is to obtain a minimum-cost tour that visits every vertex in G. In the robust versions of these problems, the edge costs are ranges [ℓ_e, u_e] from which any cost may realize.

Our main results are the following:

Theorem 1.1. (Robust Approximations.) There exist constant approximation algorithms for the robust traveling salesman and Steiner tree problems.

Remark: The constants we are able to obtain for the two problems are very different: (4.5, 3.75) for tsp (in Section 3) and (2755, 64) for stt (in Section 5). While we did not attempt to optimize the precise constants, obtaining small constants for stt comparable to the tsp result requires new ideas beyond our work and is an interesting open problem.

We complement our algorithmic results with lower bounds. Note that if ℓ_e = u_e for all e, we have mr = 0, and thus an (α, β)-robust algorithm gives an α-approximation for precise inputs. So, hardness of approximation results yield corresponding lower bounds on α. More interestingly, we show that hardness of approximation results also yield lower bounds on the value of β (see Section 6 for details):

Theorem 1.2. (APX-hardness.) A hardness of approximation of ρ for tsp (resp., stt) under P ≠ NP implies that it is NP-hard to obtain α ≤ ρ (irrespective of β) and NP-hard to obtain β ≤ ρ (irrespective of α) for robust tsp (resp., robust stt).

1.3 Our Techniques

We now give a sketch of our techniques. Before doing so, we note that for problems in P with linear objectives, it is known that running an exact algorithm using weights (ℓ_e + u_e)/2 gives a (1, 2)-robust solution [11, 16].
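To make this concrete, the following is a minimal sketch (ours, not the paper's) of that reduction for minimum spanning tree, a problem in P: run any exact MST algorithm, here Kruskal's, on the midpoint weights (ℓ_e + u_e)/2. The edge-list encoding and function names are our own illustrative choices.

```python
# Sketch of the (1, 2)-robust reduction of [11, 16] for MST: an exact MST
# computation on the midpoint weights (l_e + u_e) / 2 (encoding is ours).

def kruskal_mst(n, edges):
    """edges: list of (weight, endpoint, endpoint); returns a set of edge indices."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = set()
    for i, (_, a, b) in sorted(enumerate(edges), key=lambda t: t[1][0]):
        ra, rb = find(a), find(b)
        if ra != rb:           # adding this edge does not create a cycle
            parent[ra] = rb
            tree.add(i)
    return tree

def robust_mst(n, ranges):
    """ranges: list of (l_e, u_e, a, b) per edge; returns a (1, 2)-robust tree."""
    midpoint_edges = [((l + u) / 2.0, a, b) for (l, u, a, b) in ranges]
    return kruskal_mst(n, midpoint_edges)

# Example: a triangle where the third edge has an uncertain cost in [1, 3].
print(robust_mst(3, [(1, 1, 0, 1), (2, 2, 1, 2), (1, 3, 0, 2)]))
```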
One might hope that a similar result can be obtained for NP-hard problems by replacing the exact algorithm with an approximation algorithm in the above framework. Unfortunately, we show in Section 3 that this is not true in general. In particular, we give a robust tsp instance where using a 2-approximation for tsp with weights (ℓ_e + u_e)/2 gives a solution that is not (α, β)-robust for any α = o(n), β = o(n). More generally, a black-box approximation run on a fixed realization could output a solution including edges that have small weight relative to opt for that realization (so including these edges does not violate the approximation guarantee), but these edges could have large weight relative to mr and opt in other realizations, ruining the robustness guarantee. This establishes a qualitative difference between robust approximations for the problems in P considered earlier and the NP-hard problems considered in this paper, and demonstrates the need to develop new techniques for the latter class of problems.

LP relaxation. We denote the input graph G = (V, E). For each edge e ∈ E, the input is a range [ℓ_e, u_e] in which the actual edge weight d_e can realize to any value. (There are two common and equivalent assumptions made in the tsp literature in order to achieve reasonable approximations: in the first, the algorithm can visit vertices multiple times in the tour, while in the second, the edges satisfy the metric property. We use the former in this paper.) The robust version of a graph optimization problem P then has the LP relaxation

min { r : x ∈ 𝒫; Σ_{e∈E} d_e x_e ≤ opt(d) + r, ∀d },

where 𝒫 is the standard polytope for P, and opt(d) denotes the cost of an optimal solution when the edge weights are d = {d_e : e ∈ E}. That is, this is the standard LP for the problem, but with the additional constraint that the fractional solution x must have regret at most r for any realization of edge weights. We call the additional constraints the regret constraint set. Note that setting x to be the indicator vector of mrs and r to mr gives a feasible solution to the LP; thus, the LP optimum is at most mr, i.e., the optimal solution to the LP gives a lower bound for the regret minimization problem.

Solving the LP. We assume that the constraints in 𝒫 are separable in polynomial time (e.g., this is true for most standard optimization problems, including stt and tsp). So, designing the separation oracle comes down to separating the regret constraint set, which requires checking that

max_d [ Σ_{e∈E} d_e x_e − opt(d) ] = max_d max_sol [ Σ_{e∈E} d_e x_e − sol(d) ] = max_sol max_d [ Σ_{e∈E} d_e x_e − sol(d) ] ≤ r.

Thus, given a fractional solution x, we need to find an integer solution sol and a weight vector d that maximize the regret of x, given by Σ_{e∈E} d_e x_e − sol(d). Once sol is fixed, finding the d that maximizes the regret is simple: if sol does not include an edge e, then to maximize Σ_{e∈E} d_e x_e − sol(d) we set d_e = u_e; else, if sol includes e, we set d_e = ℓ_e. Note that in these two cases, edge e contributes u_e x_e and ℓ_e x_e − ℓ_e, respectively, to the regret. The above maximization thus becomes:

max_sol [ Σ_{e∉sol} u_e x_e + Σ_{e∈sol} (ℓ_e x_e − ℓ_e) ] = Σ_{e∈E} u_e x_e − min_sol Σ_{e∈sol} (u_e x_e − ℓ_e x_e + ℓ_e).   (1)

Thus, sol is exactly the optimal solution with edge weights w_e := u_e x_e − ℓ_e x_e + ℓ_e. (For reference, we define the derived instance of a robust graph problem as the instance with edge weights w_e.)
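The quantities above are straightforward to compute; the following small sketch (ours, with illustrative names) evaluates the worst-case realization and regret of a fractional solution x against a fixed candidate sol, and forms the derived weights w_e.

```python
# Sketch of the quantities behind Eq. (1): the adversary's worst realization
# against a fixed sol, the resulting regret, and the derived weights
# w_e = u_e x_e - l_e x_e + l_e (encoding and names are ours).

def worst_realization(edges, sol):
    # The adversary sets d_e = l_e on edges sol buys and d_e = u_e elsewhere.
    return [l if e in sol else u for e, (l, u) in enumerate(edges)]

def regret_against(edges, x, sol):
    d = worst_realization(edges, sol)
    frac_cost = sum(d[e] * x[e] for e in range(len(edges)))
    sol_cost = sum(d[e] for e in sol)
    return frac_cost - sol_cost

def derived_weights(edges, x):
    return [u * x[e] - l * x[e] + l for e, (l, u) in enumerate(edges)]

# Tiny example with three edges and ranges [l_e, u_e]:
edges = [(1.0, 2.0), (0.0, 3.0), (2.0, 2.0)]
x = [0.5, 1.0, 0.0]
print(regret_against(edges, x, sol={0, 2}))   # 0.5
print(derived_weights(edges, x))              # [1.5, 3.0, 2.0]
```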
Note that these weights are non-negative, since u_e ≥ ℓ_e ≥ 0 and x_e ≥ 0.

Now, if we were solving a problem in P, we would simply need to solve the problem on the derived instance. Indeed, we will show later that this yields an alternative technique for obtaining robust algorithms for problems in P, and recovers existing results in [16]. However, we cannot hope to find an optimal solution to an NP-hard problem. Our first compromise is that we settle for an approximate separation oracle. More precisely, our goal is to show that there exist some fixed constants α′, β′ ≥ 1 such that if Σ_{e∈E} d_e x_e > α′ · opt(d) + β′ · r for some d, then we can find sol, d′ such that Σ_{e∈E} d′_e x_e > sol(d′) + r. Since the LP optimum is at most mr, we can then obtain an (α′, β′)-robust fractional solution using the standard ellipsoid algorithm.

For tsp, we show that the above guarantee can be achieved by the classic 2-approximation based on the mst solution of the derived instance. The details appear in Section 3.

Although stt also admits a 2-approximation based on the mst solution, this turns out to be insufficient for the above guarantee. Instead, we use a different approach here. We note that the regret of the fractional solution against any fixed solution sol (i.e., the argument over which Eq. (1) maximizes) can be expressed as the following difference:

Σ_{e∉sol} (u_e x_e − ℓ_e x_e + ℓ_e) − Σ_{e∈E} (ℓ_e − ℓ_e x_e) = Σ_{e∉sol} w_e − Σ_{e∈E} w′_e,  where w′_e := ℓ_e − ℓ_e x_e.

The first term is the weight of the edges in the derived instance that are not in sol. The second term corresponds to a new stt instance with different edge weights w′_e. It turns out that the overall problem now reduces to showing
the following approximation guarantees on these two stt instances (c_1 and c_2 are constants):

(i) Σ_{e∈alg\sol} w_e ≤ c_1 · Σ_{e∈sol\alg} w_e   and   (ii) Σ_{e∈alg} w′_e ≤ c_2 · Σ_{e∈sol} w′_e.

Note that guarantee (i) on the derived instance is an unusual "difference approximation" that is stronger than usual approximation guarantees. Moreover, we need these approximation bounds to hold simultaneously, i.e., for the same alg. Obtaining these dual approximation bounds simultaneously forms the most technically challenging part of our work, and is given in Section 5.

Rounding the fractional solution. After applying our approximate separation oracles, we have a fractional solution x such that for all edge weights d, we have Σ_{e∈E} d_e x_e ≤ α′ · opt(d) + β′ · mr. Suppose that, ignoring the regret constraint set, the LP we are using has integrality gap at most β for precise inputs. Then we can bound the difference between the cost of mrs and βx in every realization: since the integrality gap is at most β, we have β · Σ_{e∈E} d_e x_e ≥ opt(d) for any d. This implies that

mrs(d) − β · Σ_{e∈E} d_e x_e ≤ mrs(d) − opt(d) ≤ mr.

Hence, the regret of mrs with respect to βx is at most mr. A natural rounding approach is then to try to match this property of mrs, i.e., to search for an integer solution alg that does not cost much more than βx in any realization. Suppose we choose alg that satisfies:

alg = argmin_sol max_d [ sol(d) − β Σ_{e∈E} d_e x_e ].   (2)

Since alg has minimum regret with respect to βx, alg's regret is also at most mr. Note that βx is a (βα′, ββ′)-robust solution. Hence, alg is a (βα′, ββ′ + 1)-robust solution. If we are solving a problem in P, the alg that satisfies Eq. (2) is the optimal solution with weights max{ℓ_e, u_e − (u_e − ℓ_e)βx_e}, and thus can be found in polynomial time. So, using an integral LP formulation (i.e., integrality gap of 1), we get a (1, 2)-robust algorithm overall for these problems. This exactly matches the results in [16], although we are using a different set of techniques. (They obtain (1, 2)-robust algorithms by choosing alg as the optimal solution with edge weights ℓ_e + u_e. For any d, consider d′ with weights d′_e = u_e + ℓ_e − d_e. By the optimality of alg, alg(d) + alg(d′) ≤ mrs(d) + mrs(d′). Rearranging, we get alg(d) − mrs(d) ≤ mrs(d′) − alg(d′) ≤ mr, so alg's cost exceeds mrs's cost by at most mr in every realization.)

Of course, for NP-hard problems, finding a solution alg that satisfies Eq. (2) is NP-hard as well. It turns out, however, that we can design a generic rounding algorithm that gives the following guarantee:

Theorem 1.3. There exists a rounding algorithm that takes as input an (α, β)-robust fractional solution to stt (resp., tsp) and outputs a (αγδ, βγδ + γ)-robust integral solution, where γ and δ are respectively the best approximation factor and integrality gap for (classical) stt (resp., tsp).

We remark that while we stated this rounding theorem for stt and tsp here, we actually give a more general version (Theorem 2.1) in Section 2 that applies to a broader class of covering problems including set cover, survivable network design, etc., and might be useful in future research in this domain.

1.4 Roadmap

We present the general rounding algorithm for robust problems in Section 2. In Section 3, we use this rounding algorithm to give a robust algorithm for the traveling salesman problem. Section 4 gives a local search algorithm for the Steiner tree problem. Both the local search algorithm and the rounding algorithm from Section 2 are then used to give a robust algorithm for the Steiner tree problem in Section 5. The hardness results for robust problems appear in Section 6. Finally, we conclude with some interesting directions of future work in Section 7.

2 A GENERAL ROUNDING ALGORITHM FOR ROBUST PROBLEMS

In this section we give the rounding algorithm of Theorem 1.3, which is a corollary of the following, more general theorem:

Theorem 2.1. Let P be an optimization problem defined on a set system S ⊆ 2^E that seeks to find the set F ∈ S that minimizes Σ_{e∈F} d_e, i.e., the sum of the weights of the elements in F. In the robust version of this optimization problem, we have d_e ∈ [ℓ_e, u_e] for all e ∈ E.

Consider an LP formulation of P (called P-LP) given by {min Σ_{e∈E} d_e x_e : x ∈ 𝒫, x ∈ [0, 1]^E}, where 𝒫 is a polytope containing the indicator vector χ_F of every F ∈ S and not containing χ_F for any F ∉ S. The corresponding LP formulation for the robust version (called P_robust-LP) is given by {min r : x ∈ 𝒫, x ∈ [0, 1]^E, Σ_{e∈E} d_e x_e ≤ opt(d) + r ∀d}. Now, suppose we have the following properties:

• There is a γ-approximation algorithm for P.
• The integrality gap of P-LP is at most δ.
• There is a feasible solution x* to P-LP that satisfies ∀d: Σ_{e∈E} d_e x*_e ≤ α · opt(d) + β · mr.

Then, there exists an algorithm that outputs a (αγδ, βγδ + γ)-robust sol for P.

Proof. The algorithm is as follows: Construct an instance P′ of P which uses the same set system S and where element e has weight max{u_e(1 − δx*_e), ℓ_e(1 − δx*_e)} + δℓ_e x*_e. Then, use the γ-approximation algorithm for P on this instance to find an integral solution F̂, and output it.
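The weight construction in this proof is simple to compute; the following sketch (ours, with illustrative names) builds the derived weights of the instance P′ from a fractional solution x*, the integrality gap δ, and the ranges.

```python
# Sketch of the weights used in the proof of Theorem 2.1: each element gets
#   max{u_e (1 - delta x*_e), l_e (1 - delta x*_e)} + delta l_e x*_e,
# and a gamma-approximation for the underlying problem is then run on them.

def rounding_weights(ranges, x_star, delta):
    weights = []
    for (l, u), xe in zip(ranges, x_star):
        s = 1.0 - delta * xe   # the sign of s decides which bound is worse
        weights.append(max(u * s, l * s) + delta * l * xe)
    return weights

print(rounding_weights([(1.0, 4.0), (0.0, 2.0)], x_star=[0.75, 0.25], delta=2.0))
```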
Given a feasible solution F to P, note that:

max_d [ Σ_{e∈F} d_e − δ Σ_{e∈E} d_e x*_e ] = Σ_{e∈F} max{u_e(1 − δx*_e), ℓ_e(1 − δx*_e)} − Σ_{e∉F} δℓ_e x*_e
  = Σ_{e∈F} [ max{u_e(1 − δx*_e), ℓ_e(1 − δx*_e)} + δℓ_e x*_e ] − Σ_{e∈E} δℓ_e x*_e.

Now, note that since F̂ was output by a γ-approximation algorithm, for any feasible solution F:

Σ_{e∈F̂} [ max{u_e(1 − δx*_e), ℓ_e(1 − δx*_e)} + δℓ_e x*_e ] ≤ γ Σ_{e∈F} [ max{u_e(1 − δx*_e), ℓ_e(1 − δx*_e)} + δℓ_e x*_e ]

⟹ Σ_{e∈F̂} [ max{u_e(1 − δx*_e), ℓ_e(1 − δx*_e)} + δℓ_e x*_e ] − γ Σ_{e∈E} δℓ_e x*_e
  ≤ γ [ Σ_{e∈F} [ max{u_e(1 − δx*_e), ℓ_e(1 − δx*_e)} + δℓ_e x*_e ] − Σ_{e∈E} δℓ_e x*_e ]
  = γ max_d [ Σ_{e∈F} d_e − δ Σ_{e∈E} d_e x*_e ].

Since P-LP has integrality gap δ, for any fractional solution x, ∀d: opt(d) ≤ δ Σ_{e∈E} d_e x_e. Fixing F to be the set of elements used in the minimum regret solution then gives:

max_d [ Σ_{e∈F} d_e − δ Σ_{e∈E} d_e x*_e ] ≤ max_d [ mrs(d) − opt(d) ] = mr.

Combined with the previous inequality, this gives:

Σ_{e∈F̂} [ max{u_e(1 − δx*_e), ℓ_e(1 − δx*_e)} + δℓ_e x*_e ] − γ Σ_{e∈E} δℓ_e x*_e ≤ γ mr
⟹ Σ_{e∈F̂} [ max{u_e(1 − δx*_e), ℓ_e(1 − δx*_e)} + δℓ_e x*_e ] − Σ_{e∈E} δℓ_e x*_e ≤ γ mr + (γ − 1) Σ_{e∈E} δℓ_e x*_e
⟹ max_d [ Σ_{e∈F̂} d_e − δ Σ_{e∈E} d_e x*_e ] ≤ γ mr + (γ − 1) Σ_{e∈E} δℓ_e x*_e.

This implies:

∀d: sol(d) = Σ_{e∈F̂} d_e ≤ δ Σ_{e∈E} d_e x*_e + γ mr + (γ − 1) Σ_{e∈E} δℓ_e x*_e
  ≤ δ Σ_{e∈E} d_e x*_e + γ mr + (γ − 1) δ Σ_{e∈E} d_e x*_e
  = γδ Σ_{e∈E} d_e x*_e + γ mr
  ≤ γδ [ α opt(d) + β mr ] + γ mr = αγδ · opt(d) + (βγδ + γ) · mr,

i.e., sol is (αγδ, βγδ + γ)-robust, as desired. □

3 ALGORITHM FOR THE ROBUST TRAVELING SALESMAN PROBLEM

In this section, we give a robust algorithm for the traveling salesman problem:

Theorem 3.1. There exists a (4.5, 3.75)-robust algorithm for the traveling salesman problem.

Recall that we consider the version of tsp in which we are allowed to use edges multiple times. We recall that any tsp tour must contain a spanning tree, and that an Eulerian walk on a doubled mst is a 2-approximation algorithm for tsp (known as the "double-tree algorithm"). One might hope that since we have a (1, 2)-robust algorithm for robust mst, one could take its output and apply the double-tree algorithm to get a (2, 4)-robust solution to robust tsp. Unfortunately, we show in Section 3.1 that this algorithm is not (α, β)-robust for any α = o(n), β = o(n). Nevertheless, we are able to leverage the connection to mst to arrive at a (4.5, 3.75)-robust algorithm for tsp, given in Section 3.3.

3.1 Failure of Double-Tree Algorithm

The black-box reduction of [16] for turning exact algorithms into (1, 2)-robust algorithms simply uses the exact algorithm to find the optimal solution when all d_e are set to (ℓ_e + u_e)/2 and outputs this solution (see [16] for details on its analysis). We give an example of a robust tsp instance where applying the double-tree algorithm to the (1, 2)-robust mst generated by this algorithm does not give a robust tsp solution. Since the doubling of this mst is a 2-approximation for tsp when all d_e are set to (ℓ_e + u_e)/2, this example also shows that using an approximation algorithm instead of an exact algorithm in the black-box reduction fails to give any reasonable robustness guarantee, as stated in Section 1.

Consider an instance of robust tsp with vertices V = {v, v_1, ..., v_n}, where there is a "type-1" edge from v to v_i for every i, with length exactly 1 − ε for some ε > 1/(2(n−1)), and a "type-2" edge from v_i to v_{i+1} for all i, as well as from v_n to v_1, each with length in the range [0, 2 − 1/(n−1)].

Consider mrs, which uses n − 1 type-2 edges and two type-1 edges to connect v to the rest of the tour. (We do not prove that this is mrs: even if it is not, it suffices to upper bound mr by this solution's regret.) Its regret is maximized in the realization where all the type-2 edges it is using have length 2 − 1/(n−1) and the type-2 edge it is not using has length 0. Note that if a solution contains a type-2 edge of length 2 − 1/(n−1), we can replace it with the two type-1 edges it is adjacent to, and the cost of the solution decreases since we set ε > 1/(2(n−1)). In turn, the optimum solution for this realization uses the type-2 edge with length 0, the two type-1 edges adjacent to this type-2 edge once, and the other n − 2 type-1 edges twice. So mrs has cost (n−1)(2 − 1/(n−1)) + 2(1−ε) = 2(n−1) + 1 − 2ε, whereas opt has cost 2(n−1)(1−ε). The regret of this solution, and hence mr, is then at most [2(n−1) + 1 − 2ε] − 2(n−1)(1−ε) = 1 + 2(n−2)ε ≤ 4εn, where the last inequality uses ε > 1/(2(n−1)).

When all edge costs are set to (ℓ_e + u_e)/2, since ε > 1/(2(n−1)), the minimum spanning tree of the graph is a star centered at v, i.e., all of the length 1 − ε edges. So the (1, 2)-robust algorithm outputs this star as its tree for mst. Doubling this tree gives a solution to the robust tsp instance that costs 2n(1 − ε) in all realizations. Consider the realization d where all type-2 edges have length 0. In this realization, mrs costs 2 − 2ε and is also the optimal solution. If the double-tree solution were (α, β)-robust, we would get that

2n(1 − ε) ≤ α · opt(d) + β · mr ≤ α · (2 − 2ε) + 4βεn.

Setting ε to, e.g., 1/n gives that one of α, β is Ω(n).
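A quick numeric check (ours, using the bounds derived above) makes the failure tangible: the doubled star has cost linear in n in every realization, while in the all-zero type-2 realization both opt and the regret bound are constants.

```python
# Numeric check of the double-tree failure example with eps = 1/n.

n = 1000
eps = 1.0 / n                      # satisfies eps > 1/(2(n-1))
double_tree = 2 * n * (1 - eps)    # doubled star on the n type-1 edges
opt_zero = 2 * (1 - eps)           # optimal tour when all type-2 edges are 0
mr_bound = 4 * eps * n             # upper bound on mr derived above

# (alpha, beta)-robustness would require
#   double_tree <= alpha * opt_zero + beta * mr_bound,
# so max(alpha, beta) is at least the following, which grows linearly in n:
print(double_tree / (opt_zero + mr_bound))
```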
3.2 LP Relaxation

We use the LP relaxation of robust traveling salesman given in Fig. 1. This is the standard subtour LP (see e.g. [22]), but augmented with variables specifying the edges used to visit each new vertex, as well as with the regret constraint set:

Minimize r subject to
  ∀∅ ≠ S ⊂ V:  Σ_{u∈S, v∈V\S} x_{uv} ≥ 2
  ∀v ∈ V:  Σ_{u≠v} x_{uv} = 2
  ∀∅ ≠ S ⊂ V, u ∈ S, v ∈ V\S:  Σ_{e∈δ(S)} f_{e,u,v} ≥ x_{uv}
  ∀d:  Σ_{e∈E} d_e x_e ≤ opt(d) + r   (3)
  ∀u, v ∈ V, u ≠ v:  0 ≤ x_{uv} ≤ 1
  ∀e ∈ E, u, v ∈ V, u ≠ v:  0 ≤ f_{e,u,v} ≤ 1
  ∀e ∈ E:  x_e ≤ 2

Fig. 1. The Robust TSP Polytope.

Integrally, x_{uv} is 1 if, splitting the tour into subpaths at each point where a vertex is visited for the first time, there is a subpath from u to v (or vice-versa). That is, x_{uv} is 1 if, between the first time u is visited and the first time v is visited, the tour only goes through vertices that were already visited before visiting u. f_{e,u,v} is 1 if edge e is used on this subpath. We use x_e to denote Σ_{u,v∈V} f_{e,u,v} for brevity. We discuss in this subsection why the constraints other than the regret constraint set (3) are identical to the standard tsp polytope. This discussion may be skipped without affecting the readability of the rest of the paper.

The standard LP for tsp is the subtour LP (see e.g. [22]), which is as follows:

  min Σ_{(u,v)∈E} d_{uv} x_{uv}
  s.t. ∀∅ ≠ S ⊂ V:  Σ_{(u,v)∈δ(S)} x_{uv} ≥ 2
       ∀v ∈ V:  Σ_{(u,v)∈E} x_{uv} = 2   (4)
       ∀(u,v) ∈ E:  0 ≤ x_{uv} ≤ 1

where δ(S) denotes the set of edges with exactly one endpoint in S. Note that because the graph is undirected, the order of u and v in terms such as (u,v), x_{uv}, and f_{e,u,v} is immaterial; e.g., there is no distinction between edge (u,v) and edge (v,u), and x_{uv} and x_{vu} denote the same variable. This LP is written for the problem formulation where the triangle inequality holds, and thus we only need to consider tours that are cycles visiting every vertex exactly once.
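For intuition, the subtour LP (4) can be solved directly on small instances by enumerating the cut constraints. The following runnable sketch (ours) does this on K4, assuming SciPy is available; at realistic scales one would separate the cut constraints with min-cut computations instead.

```python
# Sketch: the subtour LP (4) on a small complete graph, with all cut
# constraints enumerated explicitly (fine at this scale).
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

V = range(4)
E = list(combinations(V, 2))                      # K4
d = {e: 1.0 for e in E}; d[(0, 2)] = 2.0          # illustrative edge lengths
idx = {e: i for i, e in enumerate(E)}

A_ub, b_ub = [], []
for k in range(1, len(V)):
    for S in combinations(V, k):                  # all nonempty S != V
        row = np.zeros(len(E))
        for (u, v) in E:
            if (u in S) != (v in S):              # edge crosses the cut
                row[idx[(u, v)]] = -1.0           # -sum x(delta(S)) <= -2
        A_ub.append(row); b_ub.append(-2.0)

A_eq = np.zeros((len(V), len(E))); b_eq = np.full(len(V), 2.0)
for (u, v) in E:                                  # degree-2 constraints
    A_eq[u, idx[(u, v)]] = A_eq[v, idx[(u, v)]] = 1.0

c = np.array([d[e] for e in E])
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(E))
print(res.fun, dict(zip(E, np.round(res.x, 3))))
```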
We are concerned, however, with the formulation where the triangle inequality does not necessarily hold, but tours can revisit vertices and edges multiple times. To modify the subtour LP to account for this formulation, we instead let x_{uv} be an indicator variable for whether our solution connects u to v using some path in the graph. Using this definition of x_{uv}, the subtour LP constraints then tell us that we must buy a set of paths such that a set of edges directly connecting the endpoints of the paths would form a cycle visiting every vertex exactly once. Then, we introduce variables f_{e,u,v} denoting that we are using edge e on the path from u to v. For ease of notation, we let x_e = Σ_{u,v∈V} f_{e,u,v} denote the number of times a fractional solution uses edge e in paths. We can then use standard constraints from the canonical shortest path LP to ensure that in an integer solution, x_{uv} is set to 1 only if, for some path from u to v, all edges e on the path have f_{e,u,v} set to 1.

Lastly, note that the optimal tour does not use an edge more than twice. Suppose a tour uses the edge e = (u, v) thrice. By fixing a start/end vertex for the tour appropriately, we can split the tour into e, T_1, e, T_2, e, T_3, where T_1 is the part of the tour between the first and second use of e, T_2 is the part between the second and third use of e, and T_3 is the part after the third use of e. Because the tour starts and ends at the same vertex (u or v), and each of the three uses of e goes from u to v or vice versa, the number of T_1, T_2, T_3 that go from u to v or vice versa (as opposed to going from u to u or from v to v) must be odd, and hence not zero. Without loss of generality, we can assume T_1 goes from u to v. Then the tour T̄_1, T_2, e, T_3, where T̄_1 denotes the reversal of T_1, is a valid tour and costs strictly less than the original tour. So any tour using an edge more than twice is not optimal. This lets us add the constraint x_e ≤ 2 to the LP without affecting the optimal solution. This gives the formulation for tsp without the triangle inequality but with repeated edges allowed:

  min Σ_{e∈E} d_e x_e
  s.t. ∀∅ ≠ S ⊂ V:  Σ_{u∈S, v∈V\S} x_{uv} ≥ 2
       ∀v ∈ V:  Σ_{u≠v} x_{uv} = 2
       ∀∅ ≠ S ⊂ V, u ∈ S, v ∈ V\S:  Σ_{e∈δ(S)} f_{e,u,v} ≥ x_{uv}   (5)
       ∀u, v ∈ V, u ≠ v:  0 ≤ x_{uv} ≤ 1
       ∀e ∈ E, u, v ∈ V, u ≠ v:  0 ≤ f_{e,u,v} ≤ 1
       ∀e ∈ E:  x_e ≤ 2

By integrality of the shortest path polytope, if we let d̄_{uv} denote the length of the shortest path from u to v, then Σ_{e∈E, u,v∈V} d_e f_{e,u,v} ≥ Σ_{u,v∈V} d̄_{uv} x_{uv}. In particular, if we fix the values of x_{uv}, the optimal setting of the f values is to set f_{e,u,v} to x_{uv} for every e on the shortest path from u to v. So (5) without the triangle inequality assumption is equivalent to (4) with the triangle inequality assumption. In particular, the integrality gap of (5) is the same as the integrality gap of (4), which is known to be at most 3/2 [23]. Then, adding a variable r for the fractional solution's regret and the regret constraint set gives (3).

3.3 Approximate Separation Oracle

We now describe the separation oracle RRTSP-Oracle used to separate (3). All constraints except the regret constraint set can be separated in polynomial time by solving a min-cut problem. Recall that exactly separating the regret constraint set involves finding an "adversary" sol that maximizes max_d [Σ_{e∈E} d_e x_e − sol(d)] and checking whether this quantity exceeds r. However, since tsp is NP-hard, finding this solution in general is NP-hard.
Instead, we will only consider a solution sol if it is a walk on some spanning tree T, and find the one that maximizes max_d [Σ_{e∈E} d_e x_e − sol(d)]:

Algorithm 1 RRTSP-Oracle(G(V, E), {(ℓ_e, u_e)}_{e∈E}, (x, y, r))

Input: Undirected graph G(V, E), lower and upper bounds on edge lengths {(ℓ_e, u_e)}_{e∈E}, solution (x = {f_{e,u,v}}_{e∈E, u,v∈V}, y = {x_{uv}}_{u,v∈V}, r) to (3)
1: Check all constraints of (3), return any violated constraint that is found
2: G′ ← copy of G where e has weight u_e x_e − (ℓ_e x_e − 2ℓ_e)
3: T′ ← minimum spanning tree of G′
4: if Σ_{e∈T′}(ℓ_e x_e − 2ℓ_e) + Σ_{e∉T′} u_e x_e > r then
5:   return the violated inequality Σ_{e∈T′}(ℓ_e x_e − 2ℓ_e) + Σ_{e∉T′} u_e x_e ≤ r
6: else
7:   return "Feasible"
8: end if

Fig. 2. Separation Oracle for (3).

Fix any sol that is a walk on some spanning tree T. For any e, if e is not in T, the regret of x, y against sol is maximized by setting e's length to u_e. If e is in T, then sol is paying 2d_e for that edge, whereas the fractional solution pays d_e x_e ≤ 2d_e; so, to maximize the fractional solution's regret, d_e should be set to ℓ_e. This gives that the regret of the fractional solution x against any sol that is a spanning tree walk on T is

Σ_{e∈T}(ℓ_e x_e − 2ℓ_e) + Σ_{e∉T} u_e x_e = Σ_{e∈E} u_e x_e − Σ_{e∈T}(u_e x_e − (ℓ_e x_e − 2ℓ_e)).

The quantity Σ_{e∈E} u_e x_e is fixed with respect to T, so finding the spanning tree T that maximizes this quantity is equivalent to finding the T that minimizes Σ_{e∈T}(u_e x_e − (ℓ_e x_e − 2ℓ_e)). But this is just an instance of the minimum spanning tree problem where edge e has weight u_e x_e − (ℓ_e x_e − 2ℓ_e), and thus we can find T in polynomial time. This gives the following lemma:

Lemma 3.2. For any instance of robust traveling salesman, there exists an algorithm RRTSP-Oracle that, given a solution (x, y, r) to (3), either:
• Outputs a separating hyperplane for (3), or
• Outputs "Feasible," in which case (x, y) is feasible for the (non-robust) TSP LP and ∀d: Σ_{e∈E} d_e x_e ≤ 2 · opt(d) + r.

Proof of Lemma 3.2. RRTSP-Oracle is given in Fig. 2. All inequalities except the regret constraint set can be checked exactly by RRTSP-Oracle. Consider the tree T′ computed in RRTSP-Oracle and the realization d′ with d′_e = ℓ_e for e ∈ T′ and d′_e = u_e for e ∉ T′. The only other violated inequality RRTSP-Oracle can output is the inequality Σ_{e∈T′}(ℓ_e x_e − 2ℓ_e) + Σ_{e∉T′} u_e x_e ≤ r in line 5, which is equivalent to Σ_{e∈E} d′_e x_e ≤ 2 Σ_{e∈T′} ℓ_e + r. Since 2 Σ_{e∈T′} ℓ_e is the cost of a tour in realization d′ (the tour that follows a DFS on the spanning tree T′), this inequality is implied by the inequality Σ_{e∈E} d′_e x_e ≤ opt(d′) + r from the regret constraint set. Furthermore, RRTSP-Oracle only outputs this inequality when it is actually violated.

So, it suffices to show that if there exists d such that Σ_{e∈E} d_e x_e > 2 opt(d) + r, then RRTSP-Oracle outputs a violated inequality on line 5. Since opt(d) visits all vertices, it contains some spanning tree T, so that opt(d) ≥ Σ_{e∈T} d_e. Combining these inequalities gives

Σ_{e∈E} d_e x_e > 2 Σ_{e∈T} d_e + r.

Since all x_e are at most 2, setting d_e = ℓ_e for e ∈ T and d_e = u_e otherwise can only increase Σ_{e∈E} d_e x_e − 2 Σ_{e∈T} d_e, so

Σ_{e∈T} ℓ_e x_e + Σ_{e∉T} u_e x_e > 2 Σ_{e∈T} ℓ_e + r  ⟹  Σ_{e∈E} u_e x_e − Σ_{e∈T}(u_e x_e − (ℓ_e x_e − 2ℓ_e)) > r.

Then RRTSP-Oracle finds a minimum spanning tree T′ on G′, i.e., a T′ such that

Σ_{e∈T′}(u_e x_e − (ℓ_e x_e − 2ℓ_e)) ≤ Σ_{e∈T}(u_e x_e − (ℓ_e x_e − 2ℓ_e)),

which combined with the previous inequality gives

Σ_{e∈E} u_e x_e − Σ_{e∈T′}(u_e x_e − (ℓ_e x_e − 2ℓ_e)) > r  ⟹  Σ_{e∈T′}(ℓ_e x_e − 2ℓ_e) + Σ_{e∉T′} u_e x_e > r.  □
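The oracle's core computation is an MST on the derived weights followed by a single comparison. The following sketch (ours; edge-list encoding and names are illustrative) mirrors lines 2-5 of Algorithm 1.

```python
# Sketch of the core of RRTSP-Oracle: an MST on the derived weights
# u_e x_e - (l_e x_e - 2 l_e), then a check of the certified regret against r.

def rrtsp_oracle_check(n, edges, x, r):
    """edges: list of (a, b, l, u); x: fractional edge usage x_e per edge."""
    order = sorted(range(len(edges)),
                   key=lambda i: edges[i][3] * x[i]
                                 - (edges[i][2] * x[i] - 2 * edges[i][2]))
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = set()
    for i in order:                     # Kruskal's on the derived weights
        a, b = find(edges[i][0]), find(edges[i][1])
        if a != b:
            parent[a] = b
            tree.add(i)

    regret = sum(edges[i][2] * x[i] - 2 * edges[i][2] if i in tree
                 else edges[i][3] * x[i] for i in range(len(edges)))
    return ("violated", tree) if regret > r else ("feasible", tree)

print(rrtsp_oracle_check(3, [(0, 1, 1, 2), (1, 2, 1, 2), (0, 2, 1, 2)],
                         x=[1.0, 1.0, 1.0], r=1.0))
```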
By using the ellipsoid method with separation oracle RRTSP-Oracle and the fact that (3) has optimum at most mr, we get a (2, 1)-robust fractional solution. Applying Theorem 1.3, along with the facts that the TSP polytope has integrality gap 3/2 (see e.g. [22]) and that the TSP problem has a 3/2-approximation, gives Theorem 3.1.

4 A LOCAL SEARCH ALGORITHM FOR STEINER TREE

In this section, we describe a local search algorithm for Steiner tree, given in [13]. By a simplified version of the analysis appearing in [13], we show that the algorithm is 4-approximate. As with many local search algorithms, this algorithm could run in superpolynomial time in the worst case; standard tricks can be used to modify it into a polynomial-time (4 + ε)-approximation. This algorithm will serve as a primitive in the algorithms we design in Section 5.

The local moves considered by the algorithm are path swaps, defined as follows: If the current Steiner tree is T, the algorithm can pick any two vertices u, v in T such that there exists a path from u to v in which all vertices except u and v are not in T (and thus all edges on the path are not in T). The algorithm may take any path P from u to v of this form; it suffices to consider only the shortest path of this form. The path P is added to T, inducing a cycle. The algorithm then picks a subpath P′ of the cycle and removes it from T, while maintaining that T is a feasible Steiner tree. It suffices to consider only maximal subpaths: these are the subpaths formed by splitting the cycle at every vertex with degree at least 3 in T ∪ P.

Let n denote the number of nodes and m the number of edges in the graph. Since there are at most n² pairs of vertices u, v, a shortest path P between u and v of the required form can be found in O(m + n log n) time, and all maximal subpaths in the cycle induced in T ∪ P can be found in O(n) time, if there is a move that improves the cost of T, we can find it in polynomial time.

We will use the following lemma to show the approximation ratio.

Lemma 4.1. For any tree, the fraction of vertices with degree at most 2 is strictly greater than 1/2.

Proof. This follows from a Markov bound on the random variable X defined as the degree of a uniformly random vertex minus one. A tree with n vertices has average degree (2n − 2)/n < 2, so E[X] < 1. In turn, the fraction of vertices with degree 3 or greater is Pr[X ≥ 2] < 1/2. □

Theorem 4.2. Let T be a solution to an instance of Steiner tree such that no path swap reduces the cost of T. Then T is a 4-approximation.

Proof. Consider any other solution T* to the Steiner tree instance. We partition the edges of T into subpaths whose endpoints are (i) vertices with degree 3 or larger in T, or (ii) vertices in both T and T* (which might also have degree 3 or larger in T). Besides the endpoints, all other vertices in each subpath have degree 2 and are in T but not in T*. Note that a vertex may appear as the endpoint of more than one subpath. Note also that the set of vertices in T* includes the terminals, which, without loss of generality, includes all leaves of T. This, along with condition (i) for endpoints, ensures that the partition into subpaths is well-defined, i.e., if a subpath ends at a leaf of T, that leaf is in T*.
We also decompose T* into subpaths, but some edges may be contained in two of these subpaths. To decompose T*, we first partition its edges into maximal connected subgraphs of T* whose leaves are vertices in T (including terminals) and whose internal vertices are not in T. Note that some vertices may appear in more than one subgraph; e.g., an internal vertex of T* that is in T becomes a leaf in multiple subgraphs. Since these subgraphs are not necessarily paths, we next take any DFS walk on each of these subgraphs starting from one of its leaves (that is, one of the vertices in T). We take the route traversed by the DFS walk and split it into subpaths at every point where the walk reaches a leaf. This gives a set of subpaths of T* such that each subpath's endpoints are vertices in T, no other vertices on a subpath are in T, and no edge appears in more than two subpaths.

Let A and F denote the sets of subpaths we decomposed T and T* into, respectively. For A ∈ A, let F(A) ⊆ F be the set of subpaths F ∈ F such that (T \ A) ∪ F is a feasible Steiner tree, i.e., F can be swapped for A, and for A′ ⊆ A let F(A′) = ∪_{A∈A′} F(A). We will show that for any A′ ⊆ A, |F(A′)| ≥ |A′|/2. By an extension of Hall's Theorem (Fact 15 in [13]), this implies the existence of a weight function w̃ : A × F → ℝ_{≥0} such that:
(1) w̃(A, F) > 0 only if F can be swapped for A;
(2) for all subpaths A ∈ A, Σ_{F∈F(A)} w̃(A, F) = 1;
(3) for all subpaths F ∈ F, Σ_{A∈F⁻¹(F)} w̃(A, F) ≤ 2, where F⁻¹(F) = {A ∈ A : F ∈ F(A)}.

This weight function then gives:

c(T) = Σ_{A∈A} c(A) = Σ_{A∈A} Σ_{F∈F(A)} c(A) w̃(A, F) ≤ Σ_{F∈F} Σ_{A∈F⁻¹(F)} c(F) w̃(A, F) ≤ Σ_{F∈F} 2c(F) ≤ 4c(T*).

The first inequality holds by the assumption in the theorem statement that no swaps reduce the cost of T, so for A ∈ A and F ∈ F(A), c(A) ≤ c(F). The last inequality follows from the fact that every edge of T* appears in at most two subpaths in F.

We now turn towards proving that for any A′ ⊆ A, |F(A′)| ≥ |A′|/2. Fix any A′ ⊆ A. Suppose that we remove all of the edges on paths A ∈ A′ from T, and also remove all vertices on these paths except their endpoints. After removing these nodes and edges, we are left with |A′| + 1 connected components. Let T′ be a tree with |A′| + 1 vertices, one for each of these connected components, with an edge between any pair of vertices of T′ whose corresponding components are connected by a subpath A ∈ A′.

Consider any vertex of T′ with degree at most 2. We claim the corresponding component contains a vertex in T*. Let W denote the set of vertices in the corresponding component that are endpoints of subpaths in A. There must be at least one such vertex in W. Furthermore, it is not possible that all of the vertices in W are internal vertices of T with degree at least 3, since at most two subpaths of A′ leave this component and there are no cycles in T. The only other option for endpoints is vertices in T*, so this component must contain some vertex in T*.

Applying Lemma 4.1, strictly more than (|A′| + 1)/2 (i.e., at least |A′|/2 + 1) of the components have degree at most 2 in T′, and by the previous argument contain a vertex in T*. These vertices are connected by T*, and since no subpath in F has internal vertices in T, no subpath in F passes through more than two of these components. Hence, at least |A′|/2 of the subpaths in F have endpoints in two different components, because at least |A′|/2 edges are required to connect |A′|/2 + 1 vertices. In turn, any of these |A′|/2 paths F could be swapped for one of the subpaths A ∈ A′ on the path in T′ between the components containing F's endpoints. This shows that |F(A′)| ≥ |A′|/2, as desired. □
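The following runnable illustration (ours, assuming networkx is available) restricts attention to single-edge swaps, a special case of path swaps in which both the added path and the removed subpath are one edge long; it is meant only to convey the move's mechanics, not to implement the full algorithm.

```python
# Deliberately simplified illustration of the swap primitive: single-edge
# swaps only (a special case of path swaps).
import networkx as nx

def one_swap(G, tree_edges, terminals):
    """Try one improving edge swap; returns a new edge set or None."""
    T = nx.Graph()
    T.add_edges_from(tree_edges)
    for u, v, data in G.edges(data=True):
        if T.has_edge(u, v) or not (T.has_node(u) and T.has_node(v)):
            continue
        cycle_path = nx.shortest_path(T, u, v)   # adding (u, v) closes this cycle
        for a, b in zip(cycle_path, cycle_path[1:]):
            if G[a][b]["weight"] > data["weight"]:
                # exchanging (a, b) for (u, v) keeps a spanning tree of T's vertices
                new = (set(tree_edges) - {(a, b), (b, a)}) | {(u, v)}
                return prune(new, terminals)
    return None

def prune(edges, terminals):
    """Repeatedly drop non-terminal leaves; feasibility is preserved."""
    T = nx.Graph(); T.add_edges_from(edges)
    changed = True
    while changed:
        changed = False
        for v in [v for v in T if T.degree(v) == 1 and v not in terminals]:
            T.remove_node(v); changed = True
    return set(T.edges())

G = nx.Graph()
G.add_weighted_edges_from([(0, 2, 3), (2, 3, 1), (0, 3, 1)])
print(one_swap(G, {(0, 2), (2, 3)}, terminals={0, 2, 3}))  # swaps out (0, 2)
```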
5 ALGORITHM FOR THE ROBUST STEINER TREE PROBLEM

In this section, our goal is to find a fractional solution to the LP in Fig. 3 for robust Steiner tree:

Minimize r subject to
  ∀S ⊂ V such that ∅ ⊂ S ∩ R ⊂ R:  Σ_{e∈δ(S)} x_e ≥ 1   (6)
  ∀d such that d_e ∈ [ℓ_e, u_e]:  Σ_{e∈E} d_e x_e ≤ opt(d) + r   (7)
  ∀e ∈ E:  x_e ∈ [0, 1]   (8)

Fig. 3. The Robust Steiner Tree Polytope (R denotes the set of terminals).

By Theorem 1.3 and known approximation/integrality gap results for Steiner tree, this will give the following theorem:

Theorem 5.1. There exists a (2755, 64)-robust algorithm for the Steiner tree problem.

It is well known that the standard Steiner tree polytope admits an exact separation oracle (by solving the s,t-min-cut problem using edge weights x_e for all terminals s, t), so it suffices to find an approximate separation oracle for the regret constraint set. We therefore focus in this section on deriving such an oracle. Doing so is the most technically difficult part of the paper, so we break the section up into multiple parts as follows: In Section 5.1, we start with the simpler case where ℓ_e = 0 for all edges, and show how the local search algorithm of the previous section can help design the separation oracle in this case. In Section 5.2, we state our main technical lemma (Lemma 5.5), give a high-level overview of its proof, and show how it implies Theorem 5.1. In Section 5.3, we give the algorithm of Lemma 5.5. In Section 5.4, we analyze this algorithm's guarantees, deferring some proofs that are highly technical and already covered at a high level in Section 5.2. In Section 5.5, we give the deferred proofs. We believe the high-level overview in Section 5.2 captures the main ideas of Sections 5.3, 5.4, and 5.5; thus, a reader who just wants to understand the algorithm and analysis at a high level can stop reading after Section 5.2. A reader who wants a deeper understanding of the algorithm's design choices and implementation, but is not too concerned with the details of the analysis, can stop reading after Section 5.3. A reader who wants a deeper understanding of the analysis can stop reading after Section 5.4 and still have a strong understanding of the analysis.

5.1 Special Case Where the Lower Bounds on All Edge Lengths Are ℓ_e = 0

In this section, we give a simple algorithm and analysis for the special case where ℓ_e = 0 for all edges. First, we create the derived instance of the Steiner tree problem, which is a copy of the input graph G with edge weights u_e x_e + ℓ_e − ℓ_e x_e. As noted earlier, the optimal Steiner tree T* on the derived instance maximizes the regret of the fractional solution x. However, since Steiner tree is NP-hard, we cannot hope to find T* exactly. We need a Steiner tree T̂ such that the regret caused by it can be bounded against that caused by T*. The difficulty is that the regret corresponds to the total weight of the edges not in the Steiner tree (plus an offset that we will address later), whereas standard Steiner tree approximations give guarantees in terms of the total weight of the edges in the Steiner tree. We overcome this difficulty by requiring a stricter notion of "difference approximation": that the weight of the edges in T̂ \ T* be bounded against those in T* \ T̂.
Note that this condition ensures not only that the weight of the edges in T̂ is bounded against those in T*, but also that the weight of the edges not in T̂ is bounded against that of the edges not in T*. We show the following lemma to obtain the difference approximation:

Lemma 5.2. For any ε > 0, there exists a polynomial-time algorithm for the Steiner tree problem such that, if opt denotes the set of edges in the optimal solution and c(E′) denotes the total weight of a set of edges E′, then for any input instance of Steiner tree, the output solution alg satisfies c(alg \ opt) ≤ (4 + ε) · c(opt \ alg).

Proof. The algorithm we use is the local search algorithm described in Section 4, which finds alg such that c(alg) ≤ 4 · c(opt). Suppose that the cost of each edge e ∈ alg ∩ opt is now changed from its initial value to 0. After this change, alg remains locally optimal, because for every feasible solution T′ that can be reached by making a local move from alg, the amount by which the cost of alg has decreased by setting edge costs to zero is at least the amount by which the cost of T′ has decreased. Hence no local move causes a decrease in cost. Thus, alg remains a 4-approximation, which implies that c(alg \ opt) ≤ 4 · c(opt \ alg).

We also need to show that the algorithm converges in polynomially many iterations. The authors in [13] achieve this convergence by discretizing all the edge costs to the nearest multiple of an ε/poly(n) fraction of c(apx), for an initial solution apx such that c(opt) ≤ c(apx) ≤ n · c(opt). (A simple way to obtain such an apx is to start with the union of shortest paths between terminals, and then remove edges that cause cycles arbitrarily; this solution has cost between c(opt) and n · c(opt). See Section B.3 of [13] for more details.) This guarantees that the algorithm converges in polynomially many iterations, at an additive ε · c(opt) cost. For a standard approximation algorithm this is not an issue, but for an algorithm that aims for a guarantee of the form c(alg \ opt) ≤ O(1) · c(opt \ alg), an additive ε · c(opt) might be too much.

We therefore modify the algorithm as follows to ensure that it converges in polynomially many iterations: we only consider swapping out a subpath A for a path P if the decrease in cost is at least ε/4 times the cost of A, and we always choose the swap of this kind that decreases the cost by the largest amount. (Note that the decrease in cost, c(A) − c(P), and the ratio (c(A) − c(P))/c(A) are both maximized by maximizing c(A) and minimizing c(P). Any path P we consider adding is independent of the subpaths we can consider removing, since P by definition does not intersect our solution. So, to find a swap satisfying these conditions if one exists, it still suffices to consider only shortest paths P and the costliest maximal subpaths in the resulting cycles, as before.)

We now show that the algorithm converges. Later in the section we will prove two claims, so for brevity's sake we do not include their proofs here. The first claim is that as long as c(alg \ opt) > (4 + ε) c(opt \ alg), there is a swap between alg and opt whose decrease in cost is at least ε/4 times the cost of the path being swapped out, and at least ε/4n times c(alg \ opt) (the proof follows similarly to Lemma 5.10 in Section 5.3). The second claim is that in any swap, the quantity c(alg \ opt) − c(opt \ alg) decreases by the same amount as c(alg) does (see Lemma 5.12 in Section 5.4). So, we use c(alg \ opt) − c(opt \ alg) as a potential to bound the number of swaps. This potential is initially at most m · max_e c_e, is always at least min_e c_e as long as c(alg \ opt) > c(opt \ alg), and each swap decreases it multiplicatively by at least a factor of (1 − ε/4n) as long as c(alg \ opt) > (4 + ε) c(opt \ alg). Thus, the algorithm only needs to make log(m · max_e c_e / min_e c_e) / (− log(1 − ε/4n)) swaps to arrive at a solution that is a (4 + ε)-approximation, which is polynomial in the input size. □
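The modified acceptance rule is easy to state in code; the following small sketch (ours) checks the threshold and picks the largest-decrease swap among those that pass it.

```python
# Sketch of the convergence-enforcing rule: a swap removing subpath A and
# adding path P is performed only if it decreases the cost by at least
# (eps / 4) * c(A), and among such swaps the largest decrease is chosen.

def accept_swap(cost_removed, cost_added, eps):
    decrease = cost_removed - cost_added
    return decrease >= (eps / 4.0) * cost_removed

def best_swap(candidates, eps):
    """candidates: iterable of (cost_removed, cost_added) pairs."""
    accepted = [(a - b, (a, b)) for a, b in candidates if accept_swap(a, b, eps)]
    return max(accepted)[1] if accepted else None

print(best_swap([(10.0, 9.9), (8.0, 6.0), (5.0, 4.0)], eps=0.5))  # (8.0, 6.0)
```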
Recall that the regret caused by T̂ is not exactly the weight of the edges not in T̂, but includes a fixed offset of Σ_{e∈E}(ℓ_e − ℓ_e x_e). If ℓ_e = 0 for all edges, i.e., the offset is 0, then we can recover a robust algorithm from Lemma 5.2 alone, with much better constants than in Theorem 5.1:

Lemma 5.3. For any instance of robust Steiner tree in which all ℓ_e = 0, for every ε > 0 there exists an algorithm RRST-Oracle-ZLB which, given a solution (x, r) to the LP in Fig. 3, either:
• Outputs a separating hyperplane for the LP in Fig. 3, or
• Outputs "Feasible," in which case x is feasible for the (non-robust) Steiner tree LP and ∀d: Σ_{e∈E} d_e x_e ≤ opt(d) + (4 + ε) r.

Algorithm 2 RRST-Oracle-ZLB(G(V, E), {u_e}_{e∈E}, (x, r))

Input: Undirected graph G(V, E), upper bounds on edge lengths {u_e}_{e∈E}, solution (x = {x_e}_{e∈E}, r) to the LP in Fig. 3
1: Check all constraints of the LP in Fig. 3 except the regret constraint set, return any violated constraint that is found
2: G′ ← copy of G where e has cost u_e x_e
3: T′ ← output of the algorithm from Lemma 5.2 on G′
4: if Σ_{e∉T′} u_e x_e > r then
5:   return the violated inequality Σ_{e∉T′} u_e x_e ≤ r
6: else
7:   return "Feasible"
8: end if

Fig. 4. Separation Oracle for the LP in Fig. 3 when ℓ_e = 0 for all e.

RRST-Oracle-ZLB is given in Fig. 4. Via the ellipsoid method, this gives a (1, 4 + ε)-robust fractional solution. Using Theorem 1.3, the fact that the integrality gap of the LP we use is 2 [21], and the fact that there is a (ln 4 + ε) ≈ 1.39-approximation for Steiner tree [8], with an appropriate choice of ε we get the following corollary:

Corollary 5.4. There exists a (2.78, 12.51)-robust algorithm for Steiner tree when ℓ_e = 0 for all e ∈ E.

Proof of Lemma 5.3. All inequalities except the regret constraint set can be checked exactly by RRST-Oracle-ZLB. Consider the tree T′ computed in RRST-Oracle-ZLB and the realization d′ with d′_e = 0 for e ∈ T′ and d′_e = u_e for e ∉ T′. The only other violated inequality RRST-Oracle-ZLB can output is the inequality Σ_{e∉T′} u_e x_e ≤ r in line 5, which is equivalent to Σ_{e∈E} d′_e x_e ≤ T′(d′) + r, an inequality from the regret constraint set. Furthermore, RRST-Oracle-ZLB only outputs this inequality when it is actually violated. So, it suffices to show that if there exist d, sol such that Σ_{e∈E} d_e x_e > sol(d) + (4 + ε) r, then RRST-Oracle-ZLB outputs a violated inequality on line 5, i.e., finds a Steiner tree T′ such that Σ_{e∉T′} u_e x_e > r.

Suppose there exist d, sol such that Σ_{e∈E} d_e x_e > sol(d) + (4 + ε) r. Let d* be the vector obtained from d by replacing d_e with u_e for edges not in sol and with 0 for edges in sol. Replacing d with d* can only increase Σ_{e∈E} d_e x_e − sol(d), i.e.:

Σ_{e∉sol} u_e x_e = Σ_{e∈E} d*_e x_e > sol(d*) + (4 + ε) r = (4 + ε) r.   (9)
Consider the graph G′ made by RRST-Oracle-ZLB. We partition the edges into four sets E_0, E_s, E_t, E_st, where E_0 = E \ (sol ∪ T′), E_s = sol \ T′, E_t = T′ \ sol, and E_st = sol ∩ T′. Let c(E_i), for i = 0, s, t, st, denote Σ_{e∈E_i} u_e x_e, i.e., the total cost of the edge set E_i in G′. Since d* has d*_e = 0 for e ∈ sol, from (9) we get that

c(E_0) + c(E_t) > (4 + ε) r.

Now note that Σ_{e∉T′} u_e x_e = c(E_0) + c(E_s). Lemma 5.2 gives that (4 + ε) c(E_s) ≥ c(E_t). Putting it all together, we get that:

Σ_{e∉T′} u_e x_e = c(E_0) + c(E_s) ≥ c(E_0) + c(E_t)/(4 + ε) ≥ (c(E_0) + c(E_t))/(4 + ε) > (4 + ε) r / (4 + ε) = r.  □

5.2 General Case for Arbitrary Lower Bounds on Edge Lengths: High-Level Overview

In this section, we give our main lemma (Lemma 5.5), a high-level overview of the algorithm and analysis proving this lemma, and show how the lemma implies Theorem 5.1. In the general case, the approximation guarantee given in Lemma 5.2 alone does not suffice, because of the offset of Σ_{e∈E}(ℓ_e − ℓ_e x_e). We instead rely on a stronger notion of approximation, formalized in the next lemma, that provides simultaneous approximation guarantees on two sets of edge weights: w_e = u_e x_e − ℓ_e x_e + ℓ_e and w′_e = ℓ_e − ℓ_e x_e. The guarantee on w′ can then be used to handle the offset.

Lemma 5.5. Let G be a graph with terminals R and two sets of edge weights w and w′. Let sol be any Steiner tree connecting R. Let Γ′ > 1, α > 0, and 0 < ε < 4/35 be fixed constants. Then there exists a constant Γ (depending on Γ′, α, ε) and an algorithm that obtains a collection of Steiner trees alg, at least one of which (let us call it alg_i) satisfies:
• w(alg_i \ sol) ≤ 4Γ · w(sol \ alg_i), and
• w′(alg_i) ≤ (4Γ′ + α + 1 + ε) · w′(sol).

The fact that Lemma 5.5 generates multiple solutions (but only polynomially many) is fine, because as long as we can show that one of these solutions causes sufficient regret, our separation oracle can just iterate over all solutions until it finds one that does.

We give a high-level sketch of the proof of Lemma 5.5 here, and defer the full details to Section 5.4. The algorithm uses the idea of alternate minimization, alternating between a "forward phase" and a "backward phase." The goal of each forward/backward phase pair is to keep w′(alg) approximately fixed while obtaining a net decrease in w(alg). In the forward phase, the algorithm greedily uses local search, choosing swaps that decrease w and increase w′ at the best "rate of exchange" between the two costs (i.e., the maximum ratio of decrease in w to increase in w′), until w′(alg) has increased past some upper threshold. Then, in the backward phase, the algorithm greedily chooses swaps that decrease w′ while increasing w at the best rate of exchange, until w′(alg) reaches some lower threshold, at which point we start a new forward phase. We guess the value of w′(sol) (we can run many instances of this algorithm and generate different solutions based on different guesses for this purpose) and set the upper threshold for w′(alg) appropriately so that we satisfy the approximation guarantee for w′. For w, we show that as long as alg is not a 4Γ-difference approximation with respect to w, a forward/backward phase pair reduces w(alg) by a non-negligible amount (of course, if alg is a 4Γ-difference approximation, then we are done). This implies that after enough iterations, alg must be a 4Γ-difference approximation, as w(alg) can only decrease by a bounded amount.

To show this, we claim that while alg is not a 4Γ-difference approximation, for sufficiently large Γ the following bounds on rates of exchange hold (a skeleton of the resulting alternating procedure is sketched after this list):
• For each swap in the forward phase, the ratio of the decrease in w(alg) to the increase in w′(alg) is at least some constant ρ_1 times w(alg \ sol)/w′(sol \ alg).
• For each swap in the backward phase, the ratio of the increase in w(alg) to the decrease in w′(alg) is at most some constant ρ_2 times w(sol \ alg)/w′(alg \ sol).
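The following skeleton (ours, heavily simplified) shows how the two phases alternate between the thresholds; find_forward_swap, find_backward_swap, and apply are placeholders for the local-search moves described above, not functions defined in the paper.

```python
# Skeleton of the alternate minimization in Lemma 5.5's algorithm: forward
# swaps trade increases in w' for decreases in w at the best rate of
# exchange; backward swaps do the reverse. The swap finders are placeholders.

def alternate_minimization(alg, w, w_prime, upper, lower, rounds,
                           find_forward_swap, find_backward_swap, apply):
    for _ in range(rounds):
        while w_prime(alg) <= upper:           # forward phase
            swap = find_forward_swap(alg)      # best (drop in w) / (rise in w')
            if swap is None:
                return alg
            alg = apply(alg, swap)
        while w_prime(alg) > lower:            # backward phase
            swap = find_backward_swap(alg)     # best (drop in w') / (rise in w)
            if swap is None:
                return alg
            alg = apply(alg, swap)
    return alg
```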
1 ′ � (sol\alg) • For each swap in the backward phase, the ratio of increase �(alg in) to decrease in� (alg) is at most some �(sol\alg) constant � times . 2 ′ � (alg\sol) Before we discuss how to prove this claim, let us see why this claim implies that a forward phase/backward phase pair results in a net decrease�in (alg). If this claim holds, suppose we set the lower threshold � (for alg) to be, ′ ′ ′ say, 101� (sol). That is, throughout the backward phase, we have� (alg) > 101� (sol). This lower threshold lets us rewrite our upper bound on the rate of exchange in the backward phase in terms of the lower bound on rate of exchange for the forward phase: �(sol\ alg) �(sol\ alg) �(sol\ alg) �(sol\ alg) � ≤ � ≤ � ≤ � 2 2 2 2 ′ ′ ′ ′ ′ � (alg\ sol) � (alg) − � (sol) 100� (sol) 100� (sol\ alg) ACM Trans. Algor. Robust Algorithms for TSP and Steiner Tree • 17 1 �(alg\ sol) � �(alg\ sol) ≤ � = · � . 2 1 ′ ′ 4Γ 100� (sol\ alg) 400Γ� � (sol\ alg) In other words, the bound in the claim for the rate of exchange in the forward phase is larger than the bound for the backward phase by a multiplicative factor Ω(1)of· Γ. While these bounds depend on alg and thus will change with every swap, let us make the simplifying assumption that through one forward phase/backward phase pair these bounds remain constant. Then, the change �in(alg) in one phase is just the rate of exchange for that phase times the change in � (alg), which by deinition of the algorithm is roughly equal in absolute value for the forward and backward phase. So this implies that the decrease �(alg in) in the forward phase is Ω(1) · Γ times the increase in �(alg) in the backward phase, i.e., the net change across the phases is a non-negligible decrease in�(alg) ifΓ is suiciently large. Without the simplifying assumption, we can still show that the decrease in �(alg) in the forward phase is larger than the increase �(alg in) in the backward phase for large enough Γ using a much more technical analysis. In particular, our analysis will show there is a net decrease as long as: ( ) √ √ 4Γ − 1 (4Γ − 1)( Γ − 1)( Γ − 1− �)� ′ ′ � (4Γ +�+1+�) min , − (� − 1) > 0, (10) 8Γ 16(1+ �)Γ where 4(1+ �)Γ � = √ √ . ′ ′ ′ ( Γ − 1)( Γ − 1− �)(4Γ − 1)(4Γ − 1) Note that for any positiv �,e�,Γ , there exists a suiciently large valueΓ for of (10) to hold, since asΓ → ∞, we have � → 0, so that ′ ′ � (4Γ +�+1+�) (� − 1) → 0 and ( ) √ √ 4Γ − 1 (4Γ − 1)( Γ − 1)( Γ − 1− �)� min , → min{1/2, �/(4+ 4�)}. 8Γ 16(1+ �)Γ So, the same intuition holds: as long as we are willing to lose a large Γ value enough , we can make the increase in�(alg) due to the backward phase negligible compared to the decrease in the forward phase, giving us a net decrease. It remains to argue that the claimed bounds on rates of exchange hold. Let us argue the claim Γ =for 4, although the ideas easily generalize to other choices Γ. Wof e do this by generalizing the analysis giving Lemma 5.2. This analysis shows that alg if is a locally optimal solution, then �(alg\ sol) ≤ 4· �(sol\ alg), i.e.,alg is a 4-diference approximation solof . The contrapositive of this statement is that alg if is not a 4- diference approximation, then there is at least one swap that will improve it by some amount. We modify the approach of [13] by weakening the notion of locally optimal. 
In particular, suppose we define a solution alg to be "approximately" locally optimal if at least half of the (weighted) swaps between paths A in alg \ sol and paths F in sol \ alg satisfy w(A) ≤ 2w(F) (as opposed to w(A) ≤ w(F) in a locally optimal solution; the choice of 2 for both constants here implies Γ = 4). Then a modification of the analysis of the local search algorithm, losing an additional factor of 4, shows that if alg is approximately locally optimal, then w(alg \ sol) ≤ 16 · w(sol \ alg). The contrapositive of this statement, however, has a stronger consequence than before: if alg is not a 16-difference approximation, then a weighted half of the swaps satisfy w(A) > 2w(F), i.e., reduce w(alg) by at least w(A) − w(F) > w(A) − w(A)/2 = w(A)/2. The decrease in w(alg) due to all of these swaps together is at least w(alg \ sol) times some constant. In addition, since a swap between A and F increases w′(alg) by at most w′(F), the total increase in w′ due to these swaps is at most w′(sol \ alg) times some other constant. An averaging argument then gives the rate of exchange bound for the forward phase in the claim, as desired. The rate of exchange bound for the backward phase follows analogously. This completes the algorithm and proof summary, although more details are needed to formalize these arguments. Moreover, we also need to show that the algorithm runs in polynomial time.

We now formally define our separation oracle RRST-Oracle in Fig. 5 and prove that it is an approximate separation oracle in the lemma below.

Algorithm 3 RRST-Oracle(G(V, E), {[ℓ_e, u_e]}_{e∈E}, (x, r))

Input: Undirected graph G(V, E), lower and upper bounds on edge lengths {[ℓ_e, u_e]}_{e∈E}, solution (x = {x_e}_{e∈E}, r) to the LP in Fig. 3
1: Check all constraints of the LP in Fig. 3 except the regret constraint set, return any violated constraint that is found
2: G′ ← copy of G where w_e = u_e x_e − ℓ_e x_e + ℓ_e and w′_e = ℓ_e − ℓ_e x_e
3: alg ← output of the algorithm from Lemma 5.5 on G′
4: for alg_i ∈ alg do
5:   if Σ_{e∉alg_i} u_e x_e + Σ_{e∈alg_i} ℓ_e x_e − Σ_{e∈alg_i} ℓ_e > r then
6:     return the violated inequality Σ_{e∉alg_i} u_e x_e + Σ_{e∈alg_i} ℓ_e x_e − Σ_{e∈alg_i} ℓ_e ≤ r
7:   end if
8: end for
9: return "Feasible"

Fig. 5. Separation Oracle for the LP in Fig. 3.

Lemma 5.6. Fix any Γ′ > 1, α > 0, 0 < ε < 4/35, and let Γ be the constant given in Lemma 5.5. Let a = (4Γ′ + α + 2 + ε) · 4Γ + 1 and b = 4Γ. Then there exists an algorithm RRST-Oracle that, given a solution (x, r) to the LP in Fig. 3, either:
• Outputs a separating hyperplane for the LP in Fig. 3, or
• Outputs "Feasible," in which case x is feasible for the (non-robust) Steiner tree LP and ∀d: Σ_{e∈E} d_e x_e ≤ a · opt(d) + b · r.

Proof. It suffices to show that if there exist d, sol such that

Σ_{e∈E} d_e x_e > a · sol(d) + b · r, i.e., Σ_{e∈E} d_e x_e − a · sol(d) > b · r,

then RRST-Oracle outputs a violated inequality on line 6, i.e., the algorithm finds a Steiner tree T′ such that

Σ_{e∉T′} u_e x_e + Σ_{e∈T′} ℓ_e x_e − Σ_{e∈T′} ℓ_e > r.
Let T be the Steiner tree satisfying the conditions of Lemma 5.5 with c_e = u_e x_e − ℓ_e x_e + ℓ_e and c′_e = ℓ_e − ℓ_e x_e. Let A₀ = E \ (sol ∪ T), S₀ = sol \ T, and T₀ = T \ sol. For a set of edges E′, let c(E′) = Σ_{e∈E′} (u_e x_e − ℓ_e x_e + ℓ_e), i.e., the total weight of the edges in E′ with respect to c. Now, note that the regret of the fractional solution against a solution using edges T′ is:

$$\sum_{e\notin T'} u_e x_e + \sum_{e\in T'} \ell_e x_e - \sum_{e \in T'} \ell_e = \sum_{e\notin T'} (u_e x_e - \ell_e x_e + \ell_e) - \sum_{e\in E} (\ell_e - \ell_e x_e) = c(E\setminus T') - \sum_{e \in E}(\ell_e - \ell_e x_e).$$

Plugging in T′ = sol, we then get that:

$$c(A_0) + c(T_0) - \sum_{e\in E}(\ell_e - \ell_e x_e) > (\alpha - 1)\sum_{e\in\mathrm{sol}} \ell_e + \lambda \cdot r.$$

Isolating c(T₀) then gives:

$$c(T_0) > (\alpha-1)\sum_{e\in\mathrm{sol}}\ell_e + \lambda \cdot r - \sum_{e\in A_0}(u_e x_e - \ell_e x_e + \ell_e) + \sum_{e\in E}(\ell_e - \ell_e x_e) = (\alpha-1)\sum_{e\in\mathrm{sol}}\ell_e + \lambda \cdot r - \sum_{e\in A_0} u_e x_e + \sum_{e\notin A_0}(\ell_e - \ell_e x_e).$$

Since λ = 4Γ, Lemma 5.5 along with an appropriate choice of ε gives that c(T₀) ≤ λ · c(S₀), and thus:

$$c(S_0) > \frac{1}{\lambda}\left[(\alpha-1)\sum_{e\in\mathrm{sol}}\ell_e + \lambda \cdot r - \sum_{e\in A_0} u_e x_e + \sum_{e\notin A_0}(\ell_e - \ell_e x_e)\right].$$

Recall that our goal is to show that c(A₀) + c(S₀) − Σ_{e∈E}(ℓ_e − ℓ_e x_e) > r, i.e., that the regret of the fractional solution against T is at least r. Adding c(A₀) − Σ_{e∈E}(ℓ_e − ℓ_e x_e) to both sides of the previous inequality, we can lower bound c(A₀) + c(S₀) − Σ_{e∈E}(ℓ_e − ℓ_e x_e) as follows:

$$\begin{aligned}
c(A_0) + c(S_0) - \sum_{e\in E}(\ell_e - \ell_e x_e) &> \frac{1}{\lambda}\left[(\alpha-1)\sum_{e\in\mathrm{sol}}\ell_e + \lambda \cdot r - \sum_{e\in A_0}u_e x_e + \sum_{e\notin A_0}(\ell_e-\ell_e x_e)\right] + \sum_{e\in A_0}(u_e x_e - \ell_e x_e + \ell_e) - \sum_{e\in E}(\ell_e - \ell_e x_e) \\
&= r + \frac{\alpha-1}{\lambda}\sum_{e\in\mathrm{sol}}\ell_e + \frac{\lambda-1}{\lambda}\sum_{e\in A_0}u_e x_e - \frac{\lambda-1}{\lambda}\sum_{e\notin A_0}(\ell_e - \ell_e x_e) \\
&\ge r + \frac{\alpha-1}{\lambda}\sum_{e\in\mathrm{sol}}\ell_e - \sum_{e\in \mathrm{sol}}(\ell_e-\ell_e x_e) - \sum_{e\in T}(\ell_e - \ell_e x_e) \;\ge\; r.
\end{aligned}$$

Here, the second-to-last inequality drops the nonnegative term over A₀ and uses that e ∉ A₀ means e ∈ sol ∪ T. The last inequality holds because Σ_{e∈sol}(ℓ_e − ℓ_e x_e) ≤ Σ_{e∈sol} ℓ_e and, by our setting of α, we have (α − 1 − λ)/λ = 4Γ′ + β + 1 + ε, so Lemma 5.5 gives that

$$\sum_{e\in T}(\ell_e - \ell_e x_e) \le (4\Gamma' + \beta + 1 + \epsilon)\sum_{e\in \mathrm{sol}}(\ell_e - \ell_e x_e) \le \frac{\alpha-1-\lambda}{\lambda} \sum_{e\in\mathrm{sol}} \ell_e. \qquad\Box$$

By using Lemma 5.6 with the ellipsoid method and the fact that the LP optimum is at most mr, we get an (α, λ)-robust fractional solution. Then, Theorem 1.3 and known approximation/integrality gap results give us the following theorem, which with an appropriate choice of constants gives Theorem 5.1:

Theorem 5.7. Fix any Γ′ > 1, β > 0, 0 < ε < 4/35, and let Γ be the constant given in Lemma 5.5. Let α = (4Γ′ + β + 2 + ε)4Γ + 1 and λ = 4Γ. Then there exists a polynomial-time (2α ln 4 + ε, 2λ ln 4 + ln 4 + ε)-robust algorithm for the Steiner tree problem.

Proof of Theorem 5.7. By using the ellipsoid method with Lemma 5.6 we can compute a feasible (α, λ)-robust fractional solution to the Steiner tree LP (as the robust Steiner tree LP has optimum at most mr). Then, the theorem follows from Theorem 1.3 and the fact that the polytope in Fig. 3 has integrality gap 2 and there is a (ln 4 + ε)-approximation for the Steiner tree problem due to [8] (the error parameters can be rescaled appropriately to get the approximation guarantee in the theorem statement). □

Optimizing for α in Theorem 5.7 subject to the constraints in (10), we get that for a fixed (small) ε, α is minimized by setting Γ ≈ 9.284 + δ₁(ε), Γ′ ≈ 5.621 + δ₂(ε), β ≈ 2.241 + δ₃(ε) (for monotonically increasing δ₁, δ₂, δ₃ which approach 0 as ε approaches 0). Plugging in these values gives Theorem 5.1.
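Before moving on, note that the regret check at the heart of RRST-Oracle is straightforward to implement once the candidate trees from Lemma 5.5 are in hand. Below is a minimal Python sketch of lines 4-8 of Algorithm 3 (our own naming; x is the fractional LP solution indexed by edge, r the regret variable, and the generic LP-constraint checks of line 1 are elided):

```python
def regret_against(tree, x, lo, hi):
    """Regret of the fractional solution x against `tree`: the adversary
    sets d_e = hi[e] off the tree and d_e = lo[e] on it."""
    off_tree = sum(hi[e] * x[e] for e in x if e not in tree)
    on_tree = sum(lo[e] * x[e] - lo[e] for e in tree)
    return off_tree + on_tree

def rrst_oracle_regret_check(candidate_trees, x, r, lo, hi):
    """Return a tree witnessing a violated regret constraint, if any."""
    for tree in candidate_trees:
        if regret_against(tree, x, lo, hi) > r:
            return tree  # the constraint "regret against this tree <= r" is violated
    return None  # "Feasible"
```

Each candidate tree yields one linear constraint in (x, r), which is why a failed check immediately gives a separating hyperplane.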
5.3 Algorithm Description

In this section we describe the algorithm DoubleApprox, along with a few lemmas that motivate our algorithm's design and certify that it is efficient. We will again use local search to find moves that are improving with respect to c. However, now our goal is to show that we can do this without blowing up the cost with respect to c′. We can start to show this via the following lemma, which generalizes the arguments in Section 4. Informally, it says that as long as a significant fraction (1/ρ) of the swaps (rather than all the swaps) that the local search algorithm can make between its solution T and an adversarial solution S do not improve its objective by some factor θ (rather than by any amount at all), T's cost can still be bounded by 4ρθ times S's cost.

From Lemma 5.8 to Lemma 5.11 we will refer to the cost functions on edges by f, f′ instead of c, c′. This is because these lemmas are agnostic to the cost functions they are applied to, and will be applied with both f = c, f′ = c′ and f = c′, f′ = c in our algorithm. We also define A, F, μ(·,·), P(·), and P⁻¹(·) as in the proof of Theorem 4.2 for these lemmas.

Lemma 5.8. Let T and S be solutions to an instance of Steiner tree with edge costs f such that if all edges in T ∩ S have their costs set to 0, then for ρ ≥ 1, θ ≥ 1, we have

$$\sum_{q\in A,\; p\in P(q):\, f(q)\le \theta f(p)} \mu(q,p)\,f(q) \;\ge\; \frac{1}{\rho}\sum_{q\in A,\; p\in P(q)} \mu(q,p)\,f(q).$$

Then f(T \ S) ≤ 4ρθ · f(S \ T).

Proof. This follows by generalizing the argument in the proof of Theorem 4.2. After setting costs of edges in T ∩ S to 0, note that f(T) = f(T \ S) and f(S) = f(S \ T). Then:

$$\begin{aligned}
f(T\setminus S) &= \sum_{q\in A} f(q) = \sum_{q\in A}\sum_{p\in P(q)} \mu(q,p)\, f(q) && \left(\textstyle\sum_{p\in P(q)}\mu(q,p) = 1 \text{ for each } q\right)\\
&\le \rho \sum_{q\in A}\sum_{p\in P(q):\, f(q)\le\theta f(p)} \mu(q,p)\, f(q) && (\text{by the hypothesis})\\
&\le \rho\theta \sum_{q\in A}\sum_{p\in P(q):\, f(q)\le\theta f(p)} \mu(q,p)\, f(p) \;\le\; \rho\theta \sum_{p\in F}\sum_{q\in P^{-1}(p)} \mu(q,p)\, f(p) \;\le\; 4\rho\theta\, f(S\setminus T),
\end{aligned}$$

where the last inequality uses Σ_{q∈P⁻¹(p)} μ(q,p) ≤ 4 for each p, as in the proof of Theorem 4.2. □

Corollary 5.9. Let T, S be solutions to an instance of Steiner tree with edge costs f such that for parameters ρ ≥ 1, θ ≥ 1, f(T \ S) > 4ρθ · f(S \ T). Then after setting the cost of all edges in T ∩ S to 0,

$$\sum_{q\in A,\;p\in P(q):\, f(q)>\theta f(p)} \mu(q,p)\, f(q) \;>\; \frac{\rho-1}{\rho}\sum_{q\in A,\;p\in P(q)} \mu(q,p)\, f(q).$$

The corollary effectively tells us that if f(T \ S) is sufficiently larger than f(S \ T), then there are many local swaps between q in T and p in S that decrease f(T) by a large fraction of f(q). The next lemma then shows that one of these swaps also does not increase f′(T) by a large factor (even if instead of swapping in p, we swap in an approximation of p), and reduces f(T) by a non-negligible amount.

Lemma 5.10. Let T and S be solutions to an instance of Steiner tree with two sets of edge costs, f and f′, such that for parameter Γ > 1, f(T \ S) > 4Γ · f(S \ T). Fix any 0 < ε < √Γ − 1. Then there exists a swap between q ∈ A and a path p between two vertices in T such that

$$\frac{(1+\epsilon)f'(p) - f'(q)}{f(q)-(1+\epsilon)f(p)} \;\le\; \frac{4(1+\epsilon)\Gamma}{(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)}\cdot\frac{f'(S\setminus T)}{f(T\setminus S)} \qquad\text{and}\qquad f(q)-(1+\epsilon)f(p) \ge \frac{f(T\setminus S)}{n^2}.$$

Proof. We use an averaging argument to prove the lemma. Consider the quantity

$$R \;=\; \frac{\displaystyle\sum_{q\in A,\,p\in P(q):\; f(q)>\sqrt\Gamma f(p),\; f(q)-(1+\epsilon)f(p)\ge f(T\setminus S)/n^2} \mu(q,p)\big[(1+\epsilon)f'(p)-f'(q)\big]}{\displaystyle\sum_{q\in A,\,p\in P(q):\; f(q)>\sqrt\Gamma f(p),\; f(q)-(1+\epsilon)f(p)\ge f(T\setminus S)/n^2} \mu(q,p)\big[f(q)-(1+\epsilon)f(p)\big]},$$

which is the ratio of the weighted average increase in f′ to the weighted average decrease in f over all swaps where f(q) > √Γ f(p) and f(q) − (1+ε)f(p) ≥ f(T \ S)/n².
For any edge e in T ∩ S, it is also a (trivial) subpath p ∈ A ∩ F for which the only q ∈ A such that T ∪ p \ q is feasible is q = p. So for all such e we can assume that μ is defined such that μ(e, e) = 1, μ(q, e) = 0 for q ≠ e, and μ(e, p) = 0 for p ≠ e. Clearly f(e) > √Γ f(e) does not hold, so no swap with a positive μ value in either sum involves edges in T ∩ S. So we can now set the cost with respect to both f and f′ of edges in T ∩ S to 0, and doing so does not affect the quantity R. Then, the numerator can be upper bounded by 4(1+ε)f′(S \ T). For the denominator, we first observe that

$$\sum_{\substack{q,p:\; f(q)>\sqrt\Gamma f(p)\\ f(q)-(1+\epsilon)f(p)\ge f(T\setminus S)/n^2}} \mu(q,p)\big[f(q)-(1+\epsilon)f(p)\big] \;\ge\; \sum_{q,p:\; f(q)>\sqrt\Gamma f(p)} \mu(q,p)\big[f(q)-(1+\epsilon)f(p)\big] \;-\; \sum_{q,p:\; f(q)-(1+\epsilon)f(p) < f(T\setminus S)/n^2} \mu(q,p)\big[f(q)-(1+\epsilon)f(p)\big]. \tag{11}$$

The second term on the right-hand side of (11) is upper bounded by:

$$\sum_{q,p:\; f(q)-(1+\epsilon)f(p)< f(T\setminus S)/n^2} \mu(q,p)\big[f(q)-(1+\epsilon)f(p)\big] \;\le\; \sum_{q,p} \mu(q,p)\,\frac{f(T\setminus S)}{n^2} \;\le\; \frac{1}{n}\, f(T\setminus S).$$

The inequality follows because there are at most n different q ∈ A, and for each one we have Σ_{p∈P(q)} μ(q,p) = 1.

We next use Corollary 5.9 (setting both parameters to √Γ) to get the following lower bound on the first term in (11):

$$\begin{aligned}
\sum_{q,p:\; f(q)>\sqrt\Gamma f(p)} \mu(q,p)\big[f(q)-(1+\epsilon)f(p)\big] &\ge \sum_{q,p:\; f(q)>\sqrt\Gamma f(p)} \mu(q,p)\left[f(q) - \frac{1+\epsilon}{\sqrt\Gamma}f(q)\right] \\
&= \frac{\sqrt\Gamma - 1 - \epsilon}{\sqrt\Gamma} \sum_{q,p:\; f(q)>\sqrt\Gamma f(p)} \mu(q,p)\, f(q) \\
&\ge \frac{(\sqrt\Gamma-1)(\sqrt\Gamma - 1 - \epsilon)}{\Gamma} \sum_{q\in A,\, p\in P(q)} \mu(q,p)\, f(q) \;\ge\; \frac{(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)}{\Gamma}\, f(T\setminus S).
\end{aligned}$$

This lower bounds the denominator of R by ((√Γ−1)(√Γ−1−ε)/Γ − 1/n) · f(T \ S). By proper choice of ε, for sufficiently large n we can ignore the 1/n term. Then, combining the bounds implies that R is at most

$$\frac{4(1+\epsilon)\Gamma}{(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)}\cdot\frac{f'(S\setminus T)}{f(T\setminus S)}.$$

In turn, one of the swaps being summed over in R satisfies the lemma statement. □

We now almost have the tools to state our algorithm and prove Lemma 5.5. However, the local search process is now concerned with two edge costs, so just considering adding the shortest path with respect to f between each pair of vertices and deleting a subset of vertices in the induced cycle will not suffice. We instead use the following lemma:

Lemma 5.11. Given a graph G = (V, E) with edge costs c and c′, two vertices u and v, and input parameter C′, let p be the shortest path from u to v with respect to c whose cost with respect to c′ is at most C′. For all ε > 0, there is a polynomial-time algorithm that finds a path from u to v whose cost with respect to c is at most c(p) and whose cost with respect to c′ is at most (1+ε)C′.

Proof. If all edge lengths with respect to c′ are multiples of Δ, an optimal solution can be found in time poly(|V|, |E|, C′/Δ) via dynamic programming: Let ℓ(v, i) be the length of the shortest path from u to v with respect to c whose cost with respect to c′ is at most i · Δ. Using ℓ(u, i) = 0 for all i and the recurrence ℓ(w, i) ≤ ℓ(v, i − c′_{vw}/Δ) + c_{vw} for edge (v, w), we can compute ℓ(v, i) for all v, i and use backtracking from ℓ(v, C′/Δ) to retrieve p in poly(|V|, |E|, C′/Δ) time.

To get the runtime down to polynomial, we use a standard rounding trick, rounding each c′_e down to the nearest multiple of εC′/|V|. After rounding, the runtime of the dynamic programming algorithm is poly(|V|, |E|, C′/(εC′/|V|)) = poly(|V|, |E|, 1/ε). Any path has at most |V| edges, and so its cost decreases by at most εC′ in this rounding process, i.e., all paths considered by the algorithm have cost with respect to c′ of at most (1+ε)C′. Lastly, since p's cost with respect to c′ only decreases, c(p) still upper bounds the cost with respect to c of the shortest path considered by the algorithm. □
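For concreteness, here is a sketch of the dynamic program from this proof (our own code; it assumes the c′-costs have already been rounded so that every c′_e is a positive multiple of delta, and it returns only the c-cost, omitting the backtracking that recovers the path):

```python
import math

def restricted_shortest_path_cost(n, edges, s, t, c_prime_budget, delta):
    """ell[b][v] = min c-cost of an s-v path with c'-cost at most b*delta.
    edges: list of (v, w, c, cp) for an undirected graph, with cp a positive
    multiple of delta. Runs in O((c_prime_budget/delta) * len(edges)) time."""
    B = int(c_prime_budget // delta)
    ell = [[math.inf] * n for _ in range(B + 1)]
    ell[0][s] = 0.0
    for b in range(1, B + 1):
        for v in range(n):
            ell[b][v] = ell[b - 1][v]  # a larger budget can only help
        for (v, w, c, cp) in edges:
            k = int(cp // delta)  # budget layers consumed by this edge
            if 1 <= k <= b:       # lower layers are final, so one pass suffices
                ell[b][w] = min(ell[b][w], ell[b - k][v] + c)
                ell[b][v] = min(ell[b][v], ell[b - k][w] + c)
    return ell[B][t]

# Tiny example: s=0, t=2, two edges of c-cost 1 and c'-cost 0.5 each.
print(restricted_shortest_path_cost(3, [(0, 1, 1.0, 0.5), (1, 2, 1.0, 0.5)],
                                    0, 2, c_prime_budget=1.0, delta=0.5))
```

With delta = εC′/|V| as in the proof, the number of budget layers is B = O(|V|/ε), giving the claimed polynomial runtime.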
The idea is to run a local search with respect to c starting from a good approximation with respect to c′. Our algorithm alternates between a "forward" and a "backward" phase. In the forward phase, we use Lemma 5.11 to decide which paths can be added to the solution in local search moves. The local search takes any swap that causes both c(alg) and c′(alg) to decrease, if any exists. Otherwise, it picks the swap between q ∈ alg and p that, among all swaps where c(p) < c(q) and c′(p) ≤ c′(sol), minimizes the ratio (c′(p) − c′(q))/(c(q) − c(p)) (we assume we know the value of c′(sol), as we can guess many values, and our algorithm will work for the right value of c′(sol)).

If the algorithm only made swaps of this form, however, c′(alg) might become a very poor approximation of c′(sol). To control for this, when c′(alg) exceeds (4Γ′ + β) · c̃′ for some constant Γ′ > 1, we begin a "backward phase": we take the opposite approach, greedily choosing either swaps that improve both c and c′, or swaps that improve c′ and minimize the ratio (c(p) − c(q))/(c′(q) − c′(p)), until c′(alg) has been reduced to below 4Γ′ · c̃′. At this point, we begin a new forward phase.

Algorithm 4 DoubleApprox(G = (V, E), c̃, c̃′, Γ, Γ′, ε)
Input: A graph G = (V, E) with terminal set R and cost functions c, c′ for which all c′_e are a multiple of εc̃′/n, c̃ such that c̃ ∈ [c(sol), (1+ε) · c(sol)], c̃′ such that c̃′ ∈ [c′(sol), (1+ε) · c′(sol)], constants Γ, Γ′, ε
1: i ← 0
2: alg⁽⁰⁾ ← ρ-approximation of the optimal Steiner tree with respect to c′, for some ρ < 4Γ′
3: while i = 0 or c(alg⁽ⁱ⁾) < c(alg⁽ⁱ⁻²⁾) do
4:   for γ ∈ {min_e c_e, (1+ε) min_e c_e, ..., (1+ε)^⌈log_{1+ε}(n · max_e c_e / min_e c_e)⌉ min_e c_e} do {Iterate over guesses for c(alg⁽ⁱ⁾ \ sol)}
5:     alg⁽ⁱ⁺¹⁾_γ ← alg⁽ⁱ⁾ {Forward phase}
6:     while c′(alg⁽ⁱ⁺¹⁾_γ) ≤ (4Γ′ + β)c̃′ and c(alg⁽ⁱ⁺¹⁾_γ) > c(alg⁽ⁱ⁾) − γ/2 do
7:       (alg⁽ⁱ⁺¹⁾_γ, done) ← GreedySwap(alg⁽ⁱ⁺¹⁾_γ, c, c′, c̃′, γ/(10n²))
8:       if done = 1 then
9:         break while loop starting on line 6
10:      end if
11:    end while
12:    alg⁽ⁱ⁺²⁾_γ ← alg⁽ⁱ⁺¹⁾_γ {Backward phase}
13:    while c′(alg⁽ⁱ⁺²⁾_γ) ≥ 4Γ′c̃′ do
14:      (alg⁽ⁱ⁺²⁾_γ, ∼) ← GreedySwap(alg⁽ⁱ⁺²⁾_γ, c′, c, c̃, εc̃′/n)
15:    end while
16:  end for
17:  alg⁽ⁱ⁺²⁾ ← argmin_{alg⁽ⁱ⁺²⁾_γ} c(alg⁽ⁱ⁺²⁾_γ)
18:  i ← i + 2
19: end while
20: return all values of alg⁽ⁱ⁾_γ stored for any value of i, γ

Fig. 6. Algorithm DoubleApprox, which finds alg such that c(alg \ sol) ≤ O(1) · c(sol \ alg) and c′(alg) ≤ O(1) · c′(sol).

The intuition for the analysis is as follows. If, throughout a forward phase, c(alg \ sol) ≥ 4Γ · c(sol \ alg), Lemma 5.10 tells us that there is a swap where the increase in c′(alg) will be very small relative to the decrease in c(alg). (Note that our goal is to reduce the cost of c(alg \ sol) to something below 4Γ · c(sol \ alg).) Throughout the subsequent backward phase, we have c′(alg) > 4Γ′ · c′(sol), which implies c′(alg \ sol) > 4Γ′ · c′(sol \ alg). So Lemma 5.10 also implies that the total increase in c(alg) will be very small relative to the decrease in c′(alg). Since the absolute change in c′(alg) is similar between the two phases, one forward and one backward phase should decrease c(alg) overall. The formal description of the backward and forward phases is given as algorithm DoubleApprox in Fig. 6.
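Line 4 of DoubleApprox (and, later, the choice of c̃ and c̃′ in the proof of Lemma 5.5) relies on the standard step of guessing an unknown quantity up to a (1+ε) factor from a geometric grid. A small Python sketch of this step (our own helper, not from the paper):

```python
import math

def geometric_guesses(lo, hi, eps):
    """Powers of (1+eps) times lo, covering [lo, hi]: some guess is within a
    (1+eps) factor of any target in the range, and there are only
    O(log(hi/lo)/eps) guesses -- polynomially many for polynomial ratios."""
    count = math.ceil(math.log(hi / lo, 1 + eps)) + 1
    return [lo * (1 + eps) ** j for j in range(count)]

# Guesses for gamma = c(alg \ sol), which lies between the cheapest edge
# cost and n times the most expensive edge cost:
costs = [3.0, 7.5, 1.2, 9.9]
n = 6
print(geometric_guesses(min(costs), n * max(costs), eps=0.5))
```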
For the lemmas/corollaries in the following section, we implicitly assume that we know values of c̃ and c̃′ satisfying the conditions of DoubleApprox. When we conclude by proving Lemma 5.5, we will simply call DoubleApprox for every reasonable value of c̃, c̃′ that is a power of 1+ε, and one of these runs will have c̃, c̃′ satisfying the conditions. Furthermore, there are multiple error parameters in our algorithm and its analysis. For simplicity of presentation, we use the same value ε for all error parameters in the algorithm and its analysis.

Algorithm 5 GreedySwap(alg, f, f′, C′, σ)
Input: Solution alg, cost functions f, f′ on the edges, C′ ∈ [f′(sol), (1+ε)f′(sol)], minimum improvement per swap σ
1: swaps ← ∅
2: for F′ ∈ {1, 1+ε, (1+ε)², ..., (1+ε)^⌈log_{1+ε} C′⌉} do
3:   for u, v ∈ alg do
4:     Find a (1+ε)-approximation p̂ of the shortest path p from u to v with respect to f such that f′(p) ≤ F′ and
5:     p ∩ alg = {u, v} (via Lemma 5.11)
6:     for all maximal q ⊆ alg such that alg ∪ p̂ \ q is feasible and f(q) − f(p̂) ≥ σ do
7:       swaps ← swaps ∪ {(p̂, q)}
8:     end for
9:   end for
10: end for
11: if swaps = ∅ then
12:   return (alg, 1)
13: end if
14: (p*, q*) ← argmin_{(p̂,q)∈swaps} (f′(p̂) − f′(q))/(f(q) − f(p̂))
15: return (alg ∪ p* \ q*, 0)

Fig. 7. Algorithm GreedySwap, which finds a swap with the properties described in Lemma 5.10.

5.4 Algorithm Analysis and Proof of Lemma 5.5

In this section we analyze DoubleApprox and give the proof of Lemma 5.5. We skip the proofs of some technical lemmas whose main ideas have been covered already. We first make some observations. The first lets us relate the decrease in the cost of a solution alg to the decrease in the cost of alg \ sol.

Lemma 5.12. Let alg, alg′, sol be any Steiner tree solutions to a given instance. Then

$$c(\mathrm{alg}) - c(\mathrm{alg}') = [c(\mathrm{alg}\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg})] - [c(\mathrm{alg}'\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}')].$$

Proof. By symmetry, the contribution of edges in alg ∩ alg′ and edges in neither alg nor alg′ to both the left- and right-hand sides of the equality is zero, so it suffices to show that all edges in alg ⊕ alg′ contribute equally to the left- and right-hand sides.

Consider any e ∈ alg \ alg′. Its contribution to c(alg) − c(alg′) is c(e). If e ∈ alg \ sol, then e contributes c(e) to c(alg \ sol) − c(sol \ alg) and 0 to −[c(alg′ \ sol) − c(sol \ alg′)]. If e ∈ alg ∩ sol, then e contributes 0 to c(alg \ sol) − c(sol \ alg) and c(e) to −[c(alg′ \ sol) − c(sol \ alg′)]. So the total contribution of e to [c(alg \ sol) − c(sol \ alg)] − [c(alg′ \ sol) − c(sol \ alg′)] is c(e).

Similarly, consider e ∈ alg′ \ alg. Its contribution to c(alg) − c(alg′) is −c(e). If e ∈ sol \ alg, then e contributes −c(e) to c(alg \ sol) − c(sol \ alg) and 0 to [c(alg′ \ sol) − c(sol \ alg′)]. If e ∉ sol, then e contributes 0 to c(alg \ sol) − c(sol \ alg) and −c(e) to −[c(alg′ \ sol) − c(sol \ alg′)]. So the total contribution of e to [c(alg \ sol) − c(sol \ alg)] − [c(alg′ \ sol) − c(sol \ alg′)] is −c(e). □

Lemma 5.12 is useful because Lemma 5.10 relates the ratio of changes in c, c′ to c(alg \ sol), but it is difficult to track how c(alg \ sol) changes as we make swaps that improve c(alg). For example, c(alg \ sol) does not necessarily decrease with swaps that cause c(alg) to decrease (e.g., consider a swap that adds a light edge not in sol and removes a heavy edge in sol).
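Lemma 5.12 is a purely set-theoretic identity, so it can be sanity-checked mechanically. The following Python snippet (our own test harness, not part of the algorithm) verifies it on random edge sets:

```python
import random

def cost(edge_set, c):
    return sum(c[e] for e in edge_set)

random.seed(0)
edges = range(20)
c = {e: random.uniform(1, 10) for e in edges}
for _ in range(1000):
    alg  = {e for e in edges if random.random() < 0.5}
    alg2 = {e for e in edges if random.random() < 0.5}
    sol  = {e for e in edges if random.random() < 0.5}
    lhs = cost(alg, c) - cost(alg2, c)
    rhs = (cost(alg - sol, c) - cost(sol - alg, c)) \
        - (cost(alg2 - sol, c) - cost(sol - alg2, c))
    assert abs(lhs - rhs) < 1e-9  # the identity holds for arbitrary sets
```

Note that the identity does not use feasibility at all; it holds for arbitrary edge sets.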
Whenever c(alg \ sol) ≫ c(sol \ alg) (if this doesn't hold, we have a good approximation and are done), c(alg \ sol) and c(alg \ sol) − c(sol \ alg) are off by a multiplicative factor that is very close to 1. Thus we can relate the ratio of changes in Lemma 5.10 to c(alg \ sol) − c(sol \ alg) instead, at a small loss in the constant, and by Lemma 5.12 changes in this quantity are much easier to track over the course of the algorithm, simplifying our analysis greatly.

The next lemma lets us assume that any backward phase uses polynomially many calls to GreedySwap.

Lemma 5.13. Let c̃′ be any value such that c̃′ ∈ [c′(sol), (1+ε)c′(sol)], and suppose we round all c′_e up to the nearest multiple of εc̃′/n for some 0 < ε < 1. Then any ρ-approximation of sol with respect to c′ using the rounded c′ values is a ρ(1+2ε)-approximation of sol with respect to c′ using the original edge costs.

Proof. This follows because the rounding can only increase the cost of any solution, and the cost increases by at most n · εc̃′/n = εc̃′ ≤ ε(1+ε)c′(sol) ≤ 2εc′(sol). □

Via this lemma, we will assume all c′_e are already rounded. The following two lemmas formalize the intuition given in Section 5.2; in particular, by using bounds on the "rate of exchange," they show that the decrease in c in the forward phase can be lower bounded, and the increase in c in the backward phase can be upper bounded. Their proofs are highly technical and largely follow the intuition given in Section 5.2, so we defer them to the following section.

Lemma 5.14 (Forward Phase Analysis). For any even i in algorithm DoubleApprox, let γ be the power of (1+ε) times min_e c_e such that γ ∈ [c(alg⁽ⁱ⁾ \ sol), (1+ε)c(alg⁽ⁱ⁾ \ sol)]. Suppose all values of alg⁽ⁱ⁺¹⁾_γ and the final value of alg⁽ⁱ⁾ in DoubleApprox satisfy c(alg⁽ⁱ⁺¹⁾_γ \ sol) > 4Γ · c(sol \ alg⁽ⁱ⁺¹⁾_γ) and c(alg⁽ⁱ⁾ \ sol) > 4Γ · c(sol \ alg⁽ⁱ⁾). Then for 0 < ε < 2/3 − 5/(12Γ), the final values of alg⁽ⁱ⁺¹⁾_γ, alg⁽ⁱ⁾ satisfy

$$c(\mathrm{alg}^{(i)}) - c(\mathrm{alg}^{(i+1)}_\gamma) \ge \min\left\{\frac{4\Gamma-1}{8\Gamma},\; \frac{(4\Gamma-1)(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)\,\beta}{16(1+\epsilon)\Gamma^2}\right\} \cdot c(\mathrm{alg}^{(i+1)}_\gamma \setminus \mathrm{sol}).$$

Lemma 5.15 (Backward Phase Analysis). Fix any even i+2 in algorithm DoubleApprox and any value of γ. Suppose all values of alg⁽ⁱ⁺²⁾_γ satisfy c(alg⁽ⁱ⁺²⁾_γ \ sol) > 4Γ · c(sol \ alg⁽ⁱ⁺²⁾_γ). Let

$$T = \frac{c'(\mathrm{alg}^{(i+1)}_\gamma) - c'(\mathrm{alg}^{(i+2)}_\gamma)}{c'(\mathrm{sol})}.$$

Then for

$$\kappa = \frac{4(1+\epsilon)\Gamma'}{(\sqrt{\Gamma'}-1)(\sqrt{\Gamma'}-1-\epsilon)(4\Gamma-1)(4\Gamma'-1)},$$

the final values of alg⁽ⁱ⁺¹⁾_γ, alg⁽ⁱ⁺²⁾_γ satisfy

$$c(\mathrm{alg}^{(i+2)}_\gamma) - c(\mathrm{alg}^{(i+1)}_\gamma) \le (e^{\kappa T} - 1) \cdot c(\mathrm{alg}^{(i+1)}_\gamma \setminus \mathrm{sol}).$$

By combining the two preceding lemmas, we can show that as long as alg is a poor difference approximation, one forward phase and one backward phase collectively decrease c(alg⁽ⁱ⁾):

Corollary 5.16. Fix any positive even value of i+2 in algorithm DoubleApprox, and let γ be the power of (1+ε) times min_e c_e such that γ ∈ [c(alg⁽ⁱ⁾ \ sol), (1+ε)c(alg⁽ⁱ⁾ \ sol)]. Suppose all values of alg⁽ⁱ⁺¹⁾_γ and the final value of alg⁽ⁱ⁾ in DoubleApprox satisfy c(alg⁽ⁱ⁺¹⁾_γ \ sol) > 4Γ · c(sol \ alg⁽ⁱ⁺¹⁾_γ) and c(alg⁽ⁱ⁾ \ sol) > 4Γ · c(sol \ alg⁽ⁱ⁾). Then for 0 < ε < 2/3 − 5/(12Γ) and κ as defined in Lemma 5.15, the final values of alg⁽ⁱ⁺²⁾_γ, alg⁽ⁱ⁾ satisfy

$$c(\mathrm{alg}^{(i)}) - c(\mathrm{alg}^{(i+2)}_\gamma) \ge \left[\min\left\{\frac{4\Gamma-1}{8\Gamma},\; \frac{(4\Gamma-1)(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)\,\beta}{16(1+\epsilon)\Gamma^2}\right\} - \left(e^{\kappa(4\Gamma'+\beta+1+\epsilon)} - 1\right)\right] \cdot c(\mathrm{alg}^{(i+1)}_\gamma \setminus \mathrm{sol}).$$

Proof. It suffices to lower bound c(alg⁽ⁱ⁾) − c(alg⁽ⁱ⁺²⁾_γ) for this value of γ, since c(alg⁽ⁱ⁾) − c(alg⁽ⁱ⁺²⁾) must be at least this large.
After rescaling ε appropriately, we have

$$c'(\mathrm{alg}^{(i+1)}_\gamma) - c'(\mathrm{alg}^{(i+2)}_\gamma) \le c'(\mathrm{alg}^{(i+1)}_\gamma) \le (4\Gamma' + \beta + 1 + \epsilon)\,c'(\mathrm{sol}),$$

because the algorithm can increase its cost with respect to c′ by at most (1+ε)c′(sol) in any swap in the forward phase (by line 5 of GreedySwap, which bounds the increase by c′(p) ≤ C′ ≤ (1+ε)c′(sol)), so when the backward phase of line 13 of DoubleApprox begins, c′(alg⁽ⁱ⁺¹⁾_γ) exceeds the forward-phase threshold (4Γ′ + β)c̃′ ≤ (4Γ′ + β)(1+ε)c′(sol) by at most this much. Then applying Lemma 5.14 to c(alg⁽ⁱ⁾) − c(alg⁽ⁱ⁺¹⁾_γ) and Lemma 5.15 to c(alg⁽ⁱ⁺²⁾_γ) − c(alg⁽ⁱ⁺¹⁾_γ) (using T ≤ 4Γ′ + β + 1 + ε) gives:

$$c(\mathrm{alg}^{(i)}) - c(\mathrm{alg}^{(i+2)}_\gamma) = [c(\mathrm{alg}^{(i)}) - c(\mathrm{alg}^{(i+1)}_\gamma)] + [c(\mathrm{alg}^{(i+1)}_\gamma) - c(\mathrm{alg}^{(i+2)}_\gamma)] \ge \left[\min\left\{\frac{4\Gamma-1}{8\Gamma},\; \frac{(4\Gamma-1)(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)\,\beta}{16(1+\epsilon)\Gamma^2}\right\} - \left(e^{\kappa(4\Gamma'+\beta+1+\epsilon)} - 1\right)\right] \cdot c(\mathrm{alg}^{(i+1)}_\gamma \setminus \mathrm{sol}). \qquad\Box$$

Now, we can chain Corollary 5.16 multiple times to show that after sufficiently many iterations of DoubleApprox, if all intermediate values of alg⁽ⁱ⁾ are poor difference approximations, then over all these iterations c(alg⁽ⁱ⁾) must decrease multiplicatively by more than n · max_e c_e / min_e c_e, which is a contradiction as this is the ratio between an upper and a lower bound on the cost of every Steiner tree. In turn, some intermediate value of alg⁽ⁱ⁾ must have been a good difference approximation:

Lemma 5.17. Suppose Γ, Γ′, ε, and β are chosen such that for κ as defined in Lemma 5.15,

$$\min\left\{\frac{4\Gamma-1}{8\Gamma},\; \frac{(4\Gamma-1)(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)\,\beta}{16(1+\epsilon)\Gamma^2}\right\} - \left(e^{\kappa(4\Gamma'+\beta+1+\epsilon)}-1\right) > 0,$$

and 0 < ε < 2/3 − 5/(12Γ). Let τ equal

$$\tau = \frac{\min\left\{\frac{4\Gamma-1}{8\Gamma},\; \frac{(4\Gamma-1)(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)\,\beta}{16(1+\epsilon)\Gamma^2}\right\} - \left(e^{\kappa(4\Gamma'+\beta+1+\epsilon)}-1\right)}{1 + \frac{4\Gamma}{4\Gamma-1}\left(e^{\kappa(4\Gamma'+\beta+1+\epsilon)}-1\right)}.$$

Assume τ > 0 and let i* = 2(⌈log(n · max_e c_e / min_e c_e)/log(1+τ)⌉ + 1). Then there exists some intermediate value alg* assigned to alg⁽ⁱ⁾_γ by the algorithm for some i ≤ i* and γ such that c(alg* \ sol) ≤ 4Γ · c(sol \ alg*) and c′(alg*) ≤ (4Γ′ + β + 1 + ε) · c′(sol).

Proof. Let Φ(i) := c(alg⁽ⁱ⁾ \ sol) − c(sol \ alg⁽ⁱ⁾) for even i. Assume that the lemma is false. Since algorithm DoubleApprox guarantees that c′(alg⁽ⁱ⁾_γ) ≤ (4Γ′ + β + 1 + ε) · c′(sol), if the lemma is false it must be that for all i and γ, c(alg⁽ⁱ⁾_γ \ sol) > 4Γ · c(sol \ alg⁽ⁱ⁾_γ). By Corollary 5.16 and the assumption on Γ, Γ′, ε, β in the statement of this lemma, for all i we have c(alg⁽ⁱ⁾) < c(alg⁽ⁱ⁻²⁾), so the while loop on line 3 of DoubleApprox never breaks. This means that for all even i ≤ i*, alg⁽ⁱ⁾ is assigned a value in DoubleApprox. We will show that this implies that for the final value of alg⁽ⁱ⁾,

$$\Phi(i) = c(\mathrm{alg}^{(i)}\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i)}) < \frac{4\Gamma-1}{4\Gamma}\min_e c_e.$$

On the other hand, the inequality c(alg⁽ⁱ⁾ \ sol) > 4Γ · c(sol \ alg⁽ⁱ⁾) implies c(alg⁽ⁱ⁾ \ sol) − c(sol \ alg⁽ⁱ⁾) > ((4Γ−1)/4Γ) · c(alg⁽ⁱ⁾ \ sol). The value of c(alg⁽ⁱ⁾ \ sol) must be positive (otherwise c(alg⁽ⁱ⁾ \ sol) ≤ 4Γ · c(sol \ alg⁽ⁱ⁾) trivially), and hence it must be at least min_e c_e. These two inequalities conflict, which implies a contradiction. Hence the lemma must be true.

We now analyze how the quantity Φ(i) changes under the assumption that the lemma is false. Of course Φ(0) ≤ n · max_e c_e. Lemma 5.12 gives that Φ(i) − Φ(i+2) is exactly equal to c(alg⁽ⁱ⁾) − c(alg⁽ⁱ⁺²⁾).
For the value of γ such that γ ∈ [c(alg⁽ⁱ⁾ \ sol), (1+ε)c(alg⁽ⁱ⁾ \ sol)], by Corollary 5.16 and the assumption that the lemma is false, for even i we have

$$\begin{aligned}
\Phi(i) - \Phi(i+2) &\ge \left[\min\left\{\frac{4\Gamma-1}{8\Gamma},\; \frac{(4\Gamma-1)(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)\,\beta}{16(1+\epsilon)\Gamma^2}\right\} - \left(e^{\kappa(4\Gamma'+\beta+1+\epsilon)}-1\right)\right] \cdot c(\mathrm{alg}^{(i+1)}_\gamma \setminus \mathrm{sol}) \\
&\ge \left[\min\left\{\frac{4\Gamma-1}{8\Gamma},\; \frac{(4\Gamma-1)(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)\,\beta}{16(1+\epsilon)\Gamma^2}\right\} - \left(e^{\kappa(4\Gamma'+\beta+1+\epsilon)}-1\right)\right] \cdot \left[c(\mathrm{alg}^{(i+1)}_\gamma\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i+1)}_\gamma)\right]. \tag{12}
\end{aligned}$$

Lemma 5.15 (using the proof from Corollary 5.16 that T ≤ 4Γ′ + β + 1 + ε), Lemma 5.12, and the inequality c(alg⁽ⁱ⁺¹⁾_γ \ sol) − c(sol \ alg⁽ⁱ⁺¹⁾_γ) > ((4Γ−1)/4Γ) · c(alg⁽ⁱ⁺¹⁾_γ \ sol) give:

$$\Phi(i+2) - \left[c(\mathrm{alg}^{(i+1)}_\gamma\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i+1)}_\gamma)\right] \le \left(e^{\kappa(4\Gamma'+\beta+1+\epsilon)}-1\right) c(\mathrm{alg}^{(i+1)}_\gamma\setminus\mathrm{sol}) < \frac{4\Gamma}{4\Gamma-1}\left(e^{\kappa(4\Gamma'+\beta+1+\epsilon)}-1\right)\left[c(\mathrm{alg}^{(i+1)}_\gamma\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i+1)}_\gamma)\right]$$

$$\Longrightarrow\quad c(\mathrm{alg}^{(i+1)}_\gamma\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i+1)}_\gamma) > \frac{\Phi(i+2)}{1 + \frac{4\Gamma}{4\Gamma-1}\left(e^{\kappa(4\Gamma'+\beta+1+\epsilon)}-1\right)}.$$

Plugging this into (12) gives:

$$\Phi(i+2) < \left(1 + \frac{\min\left\{\frac{4\Gamma-1}{8\Gamma},\; \frac{(4\Gamma-1)(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)\,\beta}{16(1+\epsilon)\Gamma^2}\right\} - \left(e^{\kappa(4\Gamma'+\beta+1+\epsilon)}-1\right)}{1 + \frac{4\Gamma}{4\Gamma-1}\left(e^{\kappa(4\Gamma'+\beta+1+\epsilon)}-1\right)}\right)^{-1}\Phi(i) = (1+\tau)^{-1}\,\Phi(i).$$

Applying this inductively gives:

$$\Phi(i) \le (1+\tau)^{-i/2}\,\Phi(0) \le (1+\tau)^{-i/2}\, n\max_e c_e.$$

Plugging in i = i* = 2(⌈log(n · max_e c_e / min_e c_e)/log(1+τ)⌉ + 1) gives Φ(i*) ≤ (1+τ)⁻¹ min_e c_e, which is less than ((4Γ−1)/4Γ) min_e c_e, as desired. □

To prove Lemma 5.5, we now just need to certify that it suffices to guess multiple values of c̃, c̃′, and that the algorithm is efficient.

Proof of Lemma 5.5. If we have c̃ ∈ [c(sol), (1+ε) · c(sol)] and c̃′ ∈ [c′(sol), (1+ε) · c′(sol)], and the c′_e values are multiples of εc̃′/n, then the conditions of DoubleApprox are met. As long as (10) holds, that is:

$$\min\left\{\frac{4\Gamma-1}{8\Gamma},\; \frac{(4\Gamma-1)(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)\,\beta}{16(1+\epsilon)\Gamma^2}\right\} - \left(e^{\kappa(4\Gamma'+\beta+1+\epsilon)}-1\right) > 0, \tag{10}$$

we have τ > 0 in Lemma 5.17, thus giving the approximation guarantee in Lemma 5.5. For any positive ε, β, Γ′, there exists a sufficiently large value of Γ for (10) to hold, since as Γ → ∞, we have κ → 0, so e^{κ(4Γ′+β+1+ε)} − 1 → 0 and the min term approaches min{1/2, β/(4+4ε)}; so for any fixed choice of ε, β, Γ′, a sufficiently large value of Γ causes τ > 0 to hold as desired.

Some value in {min_e c_e, (1+ε) min_e c_e, ..., (1+ε)^⌈log_{1+ε}(n·max_e c_e/min_e c_e)⌉ min_e c_e} satisfies the conditions for c̃, and there are polynomially many values in this set. The same holds for c̃′ in {min_e c′_e, ..., (1+ε)^⌈log_{1+ε}(n·max_e c′_e/min_e c′_e)⌉ min_e c′_e}. So we can run DoubleApprox for all pairs of c̃, c̃′ (paying a polynomial increase in runtime), and output the union of all outputs, giving the guarantee of Lemma 5.5 by Lemma 5.17. For each c̃′ we choose, we can round the c′ edge costs to the nearest multiple of εc̃′/n before running DoubleApprox, and by Lemma 5.13 we only pay an additive O(ε) in the approximation factor with respect to c′. Finally, we note that by setting ε appropriately in the statement of Lemma 5.17, we can achieve the approximation guarantee stated in Lemma 5.5 for a different value of ε.

Then, we just need to show DoubleApprox runs in polynomial time. Lemma 5.17 shows that the while loop of line 3 only needs to be run a polynomial number (at most i*) of times. The while loop for the forward phase runs at most 5n² times, since each call to GreedySwap decreases the cost with respect to c by at least γ/(10n²), and once the total decrease exceeds γ/2 the while loop breaks. The while loop for the backward phase runs at most (β + 1 + ε)n/ε times, since the initial cost with respect to c′ is at most (4Γ′ + β + 1 + ε)c̃′, the while loop breaks when the cost is less than 4Γ′c̃′, and each call to GreedySwap improves the cost by at least εc̃′/n. Lastly, GreedySwap can be run in polynomial time, as the maximal q which need to be enumerated can be computed in polynomial time as described in Section 4. □
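The preprocessing invoked in this proof is exactly Lemma 5.13's rounding; a short Python sketch of it (our own naming, with n the number of vertices):

```python
import math

def round_up_c_prime(c_prime, cp_tilde, eps, n):
    """Round each c'_e up to the nearest multiple of eps * cp_tilde / n.
    Per Lemma 5.13, a rho-approximation w.r.t. the rounded costs is a
    rho*(1+2*eps)-approximation w.r.t. the original costs, since a tree
    has fewer than n edges and so gains at most n * (eps*cp_tilde/n)."""
    delta = eps * cp_tilde / n
    return {e: math.ceil(v / delta) * delta for e, v in c_prime.items()}
```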
5.5 Proofs of Lemmas 5.14 and 5.15

Proof of Lemma 5.14. Let alg⁽ⁱ⁺¹⁾_{γ,j} denote the value of alg⁽ⁱ⁺¹⁾_γ after j calls to GreedySwap on alg⁽ⁱ⁺¹⁾_γ, and let J be the total number of calls to GreedySwap on alg⁽ⁱ⁺¹⁾_γ. Then alg⁽ⁱ⁺¹⁾_{γ,0} is the final value of alg⁽ⁱ⁾, and the final value of alg⁽ⁱ⁺¹⁾_γ is alg⁽ⁱ⁺¹⁾_{γ,J}. Any time GreedySwap is invoked on alg⁽ⁱ⁺¹⁾_γ, by line 6 of DoubleApprox and the assumption γ ≤ (1+ε)c(alg⁽ⁱ⁾ \ sol) in the lemma statement, we have:

$$c(\mathrm{alg}^{(i+1)}_\gamma) > c(\mathrm{alg}^{(i)}) - \gamma/2 \ge c(\mathrm{alg}^{(i)}) - \frac{1+\epsilon}{2}\,c(\mathrm{alg}^{(i)}\setminus\mathrm{sol}).$$

Then, by Lemma 5.12 and the assumption c(alg⁽ⁱ⁾ \ sol) > 4Γ · c(sol \ alg⁽ⁱ⁾) in the lemma statement, we have:

$$\begin{aligned}
c(\mathrm{alg}^{(i+1)}_\gamma\setminus\mathrm{sol}) &\ge c(\mathrm{alg}^{(i+1)}_\gamma\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i+1)}_\gamma) \\
&= c(\mathrm{alg}^{(i)}\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i)}) + c(\mathrm{alg}^{(i+1)}_\gamma) - c(\mathrm{alg}^{(i)}) \\
&\ge c(\mathrm{alg}^{(i)}\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i)}) - \frac{1+\epsilon}{2}\,c(\mathrm{alg}^{(i)}\setminus\mathrm{sol}) \\
&\ge \left(\frac{1-\epsilon}{2} - \frac{1}{4\Gamma}\right) c(\mathrm{alg}^{(i)}\setminus\mathrm{sol}).
\end{aligned}$$

For ε < 2/3 − 5/(12Γ), this gives c(alg⁽ⁱ⁺¹⁾_γ \ sol)/n² ≥ γ/(10n²), so the swaps promised by Lemma 5.10 meet the minimum-improvement threshold γ/(10n²) passed to GreedySwap; hence GreedySwap never outputs a tuple where done = 1, and we can ignore lines 8-10 of DoubleApprox under the conditions in the lemma statement.

Suppose alg⁽ⁱ⁺¹⁾_{γ,J} satisfies c(alg⁽ⁱ⁺¹⁾_{γ,J}) ≤ c(alg⁽ⁱ⁾) − γ/2, a condition that causes the while loop at line 6 of DoubleApprox to exit and the forward phase to end. Then

$$\begin{aligned}
c(\mathrm{alg}^{(i)}) - c(\mathrm{alg}^{(i+1)}_{\gamma,J}) \ge \gamma/2 &\ge \tfrac12\, c(\mathrm{alg}^{(i)}\setminus\mathrm{sol}) \ge \tfrac12\left[c(\mathrm{alg}^{(i)}\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i)})\right] \\
&= \tfrac12\left[c(\mathrm{alg}^{(i+1)}_{\gamma,0}\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i+1)}_{\gamma,0})\right] \\
&\ge \tfrac12\left[c(\mathrm{alg}^{(i+1)}_{\gamma,J}\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i+1)}_{\gamma,J})\right] \ge \frac{4\Gamma-1}{8\Gamma}\, c(\mathrm{alg}^{(i+1)}_{\gamma,J}\setminus\mathrm{sol}).
\end{aligned}$$

The second-to-last inequality uses Lemma 5.12, which implies c(alg⁽ⁱ⁺¹⁾_{γ,j} \ sol) − c(sol \ alg⁽ⁱ⁺¹⁾_{γ,j}) is decreasing with swaps, and the last inequality holds by the assumption c(alg⁽ⁱ⁺¹⁾_γ \ sol) > 4Γ · c(sol \ alg⁽ⁱ⁺¹⁾_γ) in the lemma statement. Thus if c(alg⁽ⁱ⁺¹⁾_{γ,J}) ≤ c(alg⁽ⁱ⁾) − γ/2, the lemma holds.

Now assume instead that c(alg⁽ⁱ⁺¹⁾_{γ,J}) > c(alg⁽ⁱ⁾) − γ/2 when the forward phase ends. We want a lower bound on

$$c(\mathrm{alg}^{(i+1)}_{\gamma,0}) - c(\mathrm{alg}^{(i+1)}_{\gamma,J}) = \sum_{j=0}^{J-1}\left[c(\mathrm{alg}^{(i+1)}_{\gamma,j}) - c(\mathrm{alg}^{(i+1)}_{\gamma,j+1})\right].$$

We bound each c(alg⁽ⁱ⁺¹⁾_{γ,j}) − c(alg⁽ⁱ⁺¹⁾_{γ,j+1}) term using Lemma 5.10 and Lemma 5.11. By Lemma 5.10 and the assumption in the lemma statement that c(alg⁽ⁱ⁺¹⁾_γ \ sol) > 4Γ · c(sol \ alg⁽ⁱ⁺¹⁾_γ), we know there exists a swap between q ∈ alg⁽ⁱ⁺¹⁾_{γ,j} and a path p such that

$$\frac{(1+\epsilon)c'(p) - c'(q)}{c(q) - (1+\epsilon)c(p)} \le \frac{4(1+\epsilon)\Gamma}{(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)} \cdot \frac{c'(\mathrm{sol}\setminus\mathrm{alg}^{(i+1)}_{\gamma,j})}{c(\mathrm{alg}^{(i+1)}_{\gamma,j}\setminus\mathrm{sol})}.$$

By Lemma 5.11, we know that when F′ is set to a value in [c′(p), (1+ε) · c′(p)] in line 2 of GreedySwap, the algorithm finds a path p′ between the endpoints of p such that c(p′) ≤ (1+ε)c(p) and c′(p′) ≤ (1+ε)c′(p). Thus (p′, q) ∈ swaps, and the swap (p*, q*) chosen by the (j+1)th call to GreedySwap satisfies:
$$\frac{c'(p^*) - c'(q^*)}{c(q^*) - c(p^*)} \le \frac{c'(p') - c'(q)}{c(q) - c(p')} \le \frac{(1+\epsilon)c'(p) - c'(q)}{c(q) - (1+\epsilon)c(p)} \le \frac{4(1+\epsilon)\Gamma}{(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)}\cdot\frac{c'(\mathrm{sol}\setminus\mathrm{alg}^{(i+1)}_{\gamma,j})}{c(\mathrm{alg}^{(i+1)}_{\gamma,j}\setminus\mathrm{sol})}.$$

Rearranging terms and observing that c′(sol) ≥ c′(sol \ alg⁽ⁱ⁺¹⁾_{γ,j}) gives:

$$c(\mathrm{alg}^{(i+1)}_{\gamma,j}) - c(\mathrm{alg}^{(i+1)}_{\gamma,j+1}) = c(q^*) - c(p^*) \ge \frac{(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)}{4(1+\epsilon)\Gamma}\, c(\mathrm{alg}^{(i+1)}_{\gamma,j}\setminus\mathrm{sol})\cdot \frac{c'(\mathrm{alg}^{(i+1)}_{\gamma,j+1}) - c'(\mathrm{alg}^{(i+1)}_{\gamma,j})}{c'(\mathrm{sol})}.$$

This in turn gives:

$$\begin{aligned}
c(\mathrm{alg}^{(i+1)}_{\gamma,0}) - c(\mathrm{alg}^{(i+1)}_{\gamma,J}) &= \sum_{j=0}^{J-1}\left[c(\mathrm{alg}^{(i+1)}_{\gamma,j}) - c(\mathrm{alg}^{(i+1)}_{\gamma,j+1})\right] \\
&\ge \frac{(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)}{4(1+\epsilon)\Gamma}\sum_{j=0}^{J-1} c(\mathrm{alg}^{(i+1)}_{\gamma,j}\setminus\mathrm{sol})\cdot\frac{c'(\mathrm{alg}^{(i+1)}_{\gamma,j+1}) - c'(\mathrm{alg}^{(i+1)}_{\gamma,j})}{c'(\mathrm{sol})} \\
&\ge \frac{(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)}{4(1+\epsilon)\Gamma}\sum_{j=0}^{J-1}\left[c(\mathrm{alg}^{(i+1)}_{\gamma,j}\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i+1)}_{\gamma,j})\right]\cdot\frac{c'(\mathrm{alg}^{(i+1)}_{\gamma,j+1}) - c'(\mathrm{alg}^{(i+1)}_{\gamma,j})}{c'(\mathrm{sol})} \\
&\ge \frac{(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)}{4(1+\epsilon)\Gamma}\left[c(\mathrm{alg}^{(i+1)}_{\gamma,J}\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i+1)}_{\gamma,J})\right]\sum_{j=0}^{J-1}\frac{c'(\mathrm{alg}^{(i+1)}_{\gamma,j+1}) - c'(\mathrm{alg}^{(i+1)}_{\gamma,j})}{c'(\mathrm{sol})} \\
&\ge \frac{(4\Gamma-1)(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)}{16(1+\epsilon)\Gamma^2}\, c(\mathrm{alg}^{(i+1)}_{\gamma,J}\setminus\mathrm{sol}) \cdot \frac{c'(\mathrm{alg}^{(i+1)}_{\gamma,J}) - c'(\mathrm{alg}^{(i+1)}_{\gamma,0})}{c'(\mathrm{sol})} \\
&\ge \frac{(4\Gamma-1)(\sqrt\Gamma-1)(\sqrt\Gamma-1-\epsilon)\,\beta}{16(1+\epsilon)\Gamma^2}\, c(\mathrm{alg}^{(i+1)}_{\gamma,J}\setminus\mathrm{sol}).
\end{aligned}$$

The third-to-last inequality uses Lemma 5.12, which implies c(alg⁽ⁱ⁺¹⁾_{γ,j} \ sol) − c(sol \ alg⁽ⁱ⁺¹⁾_{γ,j}) is decreasing with swaps. The second-to-last inequality uses the assumption c(alg⁽ⁱ⁺¹⁾_γ \ sol) > 4Γ · c(sol \ alg⁽ⁱ⁺¹⁾_γ) in the statement of the lemma. The last inequality uses the fact that the while loop on line 6 of DoubleApprox terminates because c′(alg⁽ⁱ⁺¹⁾_{γ,J}) > (4Γ′ + β)c̃′ (by the assumption that c(alg⁽ⁱ⁺¹⁾_{γ,J}) > c(alg⁽ⁱ⁾) − γ/2), while lines 2 and 13 of DoubleApprox give that c′(alg⁽ⁱ⁺¹⁾_{γ,0}) ≤ 4Γ′c̃′, so the telescoped c′-increase is more than βc̃′ ≥ βc′(sol). □

Proof of Lemma 5.15. Because c′(alg⁽ⁱ⁺²⁾_γ) > 4Γ′c̃′ in every backward phase and c̃′ ≥ c′(sol), by Lemma 5.10, whenever GreedySwap is called on alg⁽ⁱ⁺²⁾_γ in line 14 of DoubleApprox, at least one swap is possible. Since all c′ edge costs are multiples of εc̃′/n, and the last argument to GreedySwap is εc̃′/n (which lower bounds the decrease in c′(alg⁽ⁱ⁺²⁾_γ) due to any improving swap), GreedySwap always makes a swap.
Let alg⁽ⁱ⁺²⁾_{γ,j} denote the value of alg⁽ⁱ⁺²⁾_γ after j calls to GreedySwap on alg⁽ⁱ⁺²⁾_γ, and let J be the total number of calls to GreedySwap on alg⁽ⁱ⁺²⁾_γ. Then alg⁽ⁱ⁺²⁾_{γ,0} is the final value of alg⁽ⁱ⁺¹⁾_γ, and the final value of alg⁽ⁱ⁺²⁾_γ is alg⁽ⁱ⁺²⁾_{γ,J}. We want to show that

$$c(\mathrm{alg}^{(i+2)}_{\gamma,J}) - c(\mathrm{alg}^{(i+2)}_{\gamma,0}) = \sum_{j=0}^{J-1}\left[c(\mathrm{alg}^{(i+2)}_{\gamma,j+1}) - c(\mathrm{alg}^{(i+2)}_{\gamma,j})\right] \le (e^{\kappa T}-1)\,c(\mathrm{alg}^{(i+2)}_{\gamma,0}\setminus\mathrm{sol}).$$

We bound each c(alg⁽ⁱ⁺²⁾_{γ,j+1}) − c(alg⁽ⁱ⁺²⁾_{γ,j}) term using Lemma 5.10 and Lemma 5.11. Since in a backward phase we have c′(alg⁽ⁱ⁺²⁾_{γ,j}) > 4Γ′ · c′(sol), by Lemma 5.10 (applied with f = c′, f′ = c) we know there exists a swap between q ∈ alg⁽ⁱ⁺²⁾_{γ,j} and a path p such that

$$\frac{(1+\epsilon)c(p) - c(q)}{c'(q) - (1+\epsilon)c'(p)} \le \frac{4(1+\epsilon)\Gamma'}{(\sqrt{\Gamma'}-1)(\sqrt{\Gamma'}-1-\epsilon)}\cdot\frac{c(\mathrm{sol}\setminus\mathrm{alg}^{(i+2)}_{\gamma,j})}{c'(\mathrm{alg}^{(i+2)}_{\gamma,j}\setminus\mathrm{sol})}.$$

By Lemma 5.11, we know that when the guess in line 2 of GreedySwap is set to the value in [c(p), (1+ε) · c(p)], the algorithm finds a path p′ between the endpoints of p such that c′(p′) ≤ (1+ε)c′(p) and c(p′) ≤ (1+ε)c(p). Thus (p′, q) ∈ swaps, and the swap (p*, q*) chosen by the (j+1)th call to GreedySwap satisfies:

$$\frac{c(p^*) - c(q^*)}{c'(q^*) - c'(p^*)} \le \frac{c(p')-c(q)}{c'(q)-c'(p')} \le \frac{(1+\epsilon)c(p)-c(q)}{c'(q)-(1+\epsilon)c'(p)} \le \frac{4(1+\epsilon)\Gamma'}{(\sqrt{\Gamma'}-1)(\sqrt{\Gamma'}-1-\epsilon)}\cdot\frac{c(\mathrm{sol}\setminus\mathrm{alg}^{(i+2)}_{\gamma,j})}{c'(\mathrm{alg}^{(i+2)}_{\gamma,j}\setminus\mathrm{sol})} \le \frac{4(1+\epsilon)\Gamma'}{(\sqrt{\Gamma'}-1)(\sqrt{\Gamma'}-1-\epsilon)(4\Gamma'-1)(4\Gamma)}\cdot\frac{c(\mathrm{alg}^{(i+2)}_{\gamma,j}\setminus\mathrm{sol})}{c'(\mathrm{sol})}.$$

The last inequality is derived using the assumption c(alg⁽ⁱ⁺²⁾_γ \ sol) > 4Γ · c(sol \ alg⁽ⁱ⁺²⁾_γ) in the statement of the lemma, as well as the fact that for all j < J, c′(alg⁽ⁱ⁺²⁾_{γ,j}) ≥ 4Γ′c′(sol), which implies c′(alg⁽ⁱ⁺²⁾_{γ,j} \ sol) ≥ c′(alg⁽ⁱ⁺²⁾_{γ,j}) − c′(sol) ≥ (4Γ′−1)c′(sol). This in turn gives:

$$\begin{aligned}
c(\mathrm{alg}^{(i+2)}_{\gamma,J}) - c(\mathrm{alg}^{(i+2)}_{\gamma,0}) &= \sum_{j=0}^{J-1}\frac{c(\mathrm{alg}^{(i+2)}_{\gamma,j+1}) - c(\mathrm{alg}^{(i+2)}_{\gamma,j})}{c'(\mathrm{alg}^{(i+2)}_{\gamma,j}) - c'(\mathrm{alg}^{(i+2)}_{\gamma,j+1})}\cdot\left[c'(\mathrm{alg}^{(i+2)}_{\gamma,j}) - c'(\mathrm{alg}^{(i+2)}_{\gamma,j+1})\right] \\
&\le \frac{4(1+\epsilon)\Gamma'}{(\sqrt{\Gamma'}-1)(\sqrt{\Gamma'}-1-\epsilon)(4\Gamma'-1)(4\Gamma)}\sum_{j=0}^{J-1} c(\mathrm{alg}^{(i+2)}_{\gamma,j}\setminus\mathrm{sol})\cdot\frac{c'(\mathrm{alg}^{(i+2)}_{\gamma,j}) - c'(\mathrm{alg}^{(i+2)}_{\gamma,j+1})}{c'(\mathrm{sol})} \\
&\le \underbrace{\frac{4(1+\epsilon)\Gamma'}{(\sqrt{\Gamma'}-1)(\sqrt{\Gamma'}-1-\epsilon)(4\Gamma'-1)(4\Gamma-1)}}_{=\,\kappa}\sum_{j=0}^{J-1}\left[c(\mathrm{alg}^{(i+2)}_{\gamma,j}\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i+2)}_{\gamma,j})\right]\cdot\frac{c'(\mathrm{alg}^{(i+2)}_{\gamma,j}) - c'(\mathrm{alg}^{(i+2)}_{\gamma,j+1})}{c'(\mathrm{sol})}. \tag{14}
\end{aligned}$$

The last inequality is proved using the assumption c(alg⁽ⁱ⁺²⁾_γ \ sol) > 4Γ · c(sol \ alg⁽ⁱ⁺²⁾_γ) in the statement of the lemma, which implies

$$c(\mathrm{alg}^{(i+2)}_{\gamma,j}\setminus\mathrm{sol}) = \frac{4\Gamma}{4\Gamma-1}\,c(\mathrm{alg}^{(i+2)}_{\gamma,j}\setminus\mathrm{sol}) - \frac{1}{4\Gamma-1}\,c(\mathrm{alg}^{(i+2)}_{\gamma,j}\setminus\mathrm{sol}) < \frac{4\Gamma}{4\Gamma-1}\left[c(\mathrm{alg}^{(i+2)}_{\gamma,j}\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i+2)}_{\gamma,j})\right].$$

It now suffices to show

$$\sum_{j=0}^{J-1}\left[c(\mathrm{alg}^{(i+2)}_{\gamma,j}\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i+2)}_{\gamma,j})\right]\cdot\frac{c'(\mathrm{alg}^{(i+2)}_{\gamma,j}) - c'(\mathrm{alg}^{(i+2)}_{\gamma,j+1})}{c'(\mathrm{sol})} \le \frac{e^{\kappa T}-1}{\kappa}\, c(\mathrm{alg}^{(i+2)}_{\gamma,0}\setminus\mathrm{sol}),$$

since multiplying by κ in (14) then yields the bound in the lemma statement. To do so, we view the series of swaps as occurring over a continuous timeline, where for j = 0, 1, ..., J−1 the (j+1)th swap takes time t(j) = [c′(alg⁽ⁱ⁺²⁾_{γ,j}) − c′(alg⁽ⁱ⁺²⁾_{γ,j+1})]/c′(sol), i.e., occurs from time Σ_{j′<j} t(j′) to time Σ_{j′≤j} t(j′). The total time taken to perform all swaps in the sum is the total decrease in c′ across all swaps, divided by c′(sol), i.e., exactly T. Using this definition of time, let Φ(x) denote c(alg⁽ⁱ⁺²⁾_{γ,j} \ sol) − c(sol \ alg⁽ⁱ⁺²⁾_{γ,j}) for the value of j satisfying x ∈ [Σ_{j′<j} t(j′), Σ_{j′≤j} t(j′)). Using this definition, we get:

$$\sum_{j=0}^{J-1}\left[c(\mathrm{alg}^{(i+2)}_{\gamma,j}\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i+2)}_{\gamma,j})\right]\cdot\frac{c'(\mathrm{alg}^{(i+2)}_{\gamma,j}) - c'(\mathrm{alg}^{(i+2)}_{\gamma,j+1})}{c'(\mathrm{sol})} = \int_0^{T}\Phi(x)\,dx.$$

We conclude by claiming Φ(x) ≤ e^{κx} · c(alg⁽ⁱ⁺²⁾_{γ,0} \ sol). Given this claim, we get:

$$\int_0^{T}\Phi(x)\,dx \le c(\mathrm{alg}^{(i+2)}_{\gamma,0}\setminus\mathrm{sol})\int_0^{T} e^{\kappa x}\,dx = \frac{e^{\kappa T}-1}{\kappa}\,c(\mathrm{alg}^{(i+2)}_{\gamma,0}\setminus\mathrm{sol}),$$

which completes the proof of the lemma. We now focus on proving the claim. Since Φ(x) is fixed in the interval [Σ_{j′<j} t(j′), Σ_{j′≤j} t(j′)), it suffices to prove the claim only for x equal to Σ_{j′<j} t(j′) for some j, so we proceed by induction on j. The claim clearly holds for j = 0, since Σ_{j′<0} t(j′) = 0 and Φ(0) = c(alg⁽ⁱ⁺²⁾_{γ,0} \ sol) − c(sol \ alg⁽ⁱ⁺²⁾_{γ,0}) ≤ c(alg⁽ⁱ⁺²⁾_{γ,0} \ sol).

Assume that for x = Σ_{j′<j} t(j′), we have Φ(x) ≤ e^{κx} · c(alg⁽ⁱ⁺²⁾_{γ,0} \ sol). For x′ = x + t(j), by induction we can prove the claim by showing Φ(x′) ≤ e^{κ·t(j)} Φ(x).
To show this, we consider the quantity

$$\begin{aligned}
\Phi(x') - \Phi(x) &= \left[c(\mathrm{alg}^{(i+2)}_{\gamma,j+1}\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i+2)}_{\gamma,j+1})\right] - \left[c(\mathrm{alg}^{(i+2)}_{\gamma,j}\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i+2)}_{\gamma,j})\right] \\
&= \left[c(\mathrm{alg}^{(i+2)}_{\gamma,j+1}\setminus\mathrm{sol}) - c(\mathrm{alg}^{(i+2)}_{\gamma,j}\setminus\mathrm{sol})\right] + \left[c(\mathrm{sol}\setminus\mathrm{alg}^{(i+2)}_{\gamma,j}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i+2)}_{\gamma,j+1})\right].
\end{aligned}$$

By Lemma 5.12 and reusing the bound in (14), we have:

$$\Phi(x') - \Phi(x) = c(\mathrm{alg}^{(i+2)}_{\gamma,j+1}) - c(\mathrm{alg}^{(i+2)}_{\gamma,j}) \le \kappa\left[c(\mathrm{alg}^{(i+2)}_{\gamma,j}\setminus\mathrm{sol}) - c(\mathrm{sol}\setminus\mathrm{alg}^{(i+2)}_{\gamma,j})\right]\cdot\frac{c'(\mathrm{alg}^{(i+2)}_{\gamma,j}) - c'(\mathrm{alg}^{(i+2)}_{\gamma,j+1})}{c'(\mathrm{sol})} = \kappa\cdot\Phi(x)\cdot t(j).$$

Rearranging terms we have:

$$\Phi(x') \le (1 + \kappa\cdot t(j))\,\Phi(x) \le e^{\kappa\, t(j)}\,\Phi(x),$$

where we use the inequality 1 + y ≤ e^y. This completes the proof of the claim. □

6 HARDNESS RESULTS FOR ROBUST PROBLEMS

We give the following general hardness result for a family of problems that includes many graph optimization problems:

Theorem 6.1. Let P be any robust covering problem whose input includes a weighted graph G where the lengths of the edges are given as ranges [ℓ_e, u_e], and for which the non-robust version of the problem, P′, has the following properties:
• A solution to an instance of P′ can be written as a (multi-)set S of edges in G, and has cost Σ_{e∈S} d_e.
• Given an input including G to P′, there is a polynomial-time approximation-preserving reduction from solving P′ on this input to solving P′ on some input including G′, where G′ is the graph formed by taking G, adding a new vertex v*, and adding a single edge from v* to some v ∈ V of weight 0.
• For any input including G to P′, given any spanning tree T of G, there exists a feasible solution only including edges from T.
Then, if there exists a polynomial-time (α, β)-robust algorithm for P, there exists a polynomial-time β-approximation algorithm for P′.

Before proving Theorem 6.1, we note that robust traveling salesman and robust Steiner tree are examples of problems for which Theorem 6.1 implicitly gives lower bounds. For both problems, the first property clearly holds. For traveling salesman, given any input G, any solution to the problem on input G′ as described in Theorem 6.1 can be turned into a solution of the same cost on input G by removing the new vertex v* (since v* is at distance 0 from v, removing it does not affect the length of any tour), giving the second property. For any spanning tree of G, a walk on the spanning tree gives a valid TSP tour, giving the third property. For Steiner tree, for the input with graph G′ and the same terminal set, for any solution containing the edge (v*, v) we can remove this edge and get a solution for the input with graph G that is feasible and of the same cost. Otherwise, the solution is already a feasible solution of the same cost for the input with graph G, so the second property holds. Any spanning tree is a feasible Steiner tree, giving the third property.

We now give the proof of Theorem 6.1.

Proof of Theorem 6.1. Suppose there exists a polynomial-time (α, β)-robust algorithm A for P. The β-approximation algorithm for P′ is as follows:

(1) From the input instance I of P′ where the graph is G, use the approximation-preserving reduction (that must exist by the second property of the theorem) to construct instance I′ of P′ where the graph is G′.
(2) Construct an instance I′′ of P from I′ as follows: For all edges in G′, their length is fixed to their length in I′.
In addition, we add a "special" edge from v* to all vertices besides v with length range [0, ∞]. (Here ∞ is used to simplify the proof, but it can be replaced with a sufficiently large finite number; for example, the total weight of all edges in G suffices and has small bit complexity.)
(3) Run A on I′′ to get a solution sol. Treat this solution as a solution to I′ (we will show it only uses edges that appear in I′). Use the approximation-preserving reduction to convert sol into a solution for I, and output this solution.

Let C* denote the cost of the optimal solution to I′. Then mr ≤ C*. To see why, note that the optimal solution to I′ has cost C* in all realizations of demands since it only uses edges of fixed cost, and thus its regret is at most C*. This also implies that for all d, opt(d) is finite. Then for all d, sol(d) ≤ α · opt(d) + β · mr, i.e., sol(d) is finite in all realizations of demands, so sol does not include any special edges, as any solution with a special edge has infinite cost in some realization of demands.

Now consider the realization of demands d where all special edges have length 0. The special edges and the edge (v*, v) span G′, so by the third property of P′ in the theorem statement there is a solution using only 0-cost edges in this realization, i.e., opt(d) = 0. Then in this realization, sol(d) ≤ α · opt(d) + β · mr ≤ β · C*. But since sol does not include any special edges, and all edges besides special edges have fixed cost that is the same in I′′ as in I′, sol(d) also is the cost of sol in instance I′, i.e., sol is a β-approximation for I′. Since the reduction from I to I′ is approximation-preserving, we also get a β-approximation for I. □

From [10, 15] we then get the following hardness results:

Corollary 6.2. Finding an (α, β)-robust solution for Steiner tree where β < 96/95 is NP-hard.

Corollary 6.3. Finding an (α, β)-robust solution for TSP where β < 121/120 is NP-hard.
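The instance transformation in step (2) is mechanical. Here is a Python sketch (our own encoding, not from the paper: an instance is a list of ((endpoints), (lower, upper)) pairs, and we use the finite surrogate for ∞ noted in step (2)):

```python
def build_robust_instance(fixed_edges, vertices, v_star, v):
    """Step (2) of the reduction: every edge of G' (including the weight-0
    edge (v_star, v)) gets its range collapsed to its fixed length; then a
    'special' edge goes from v_star to each vertex other than v, with range
    [0, M], where M = total weight of all edges stands in for infinity."""
    M = sum(length for (_, _, length) in fixed_edges)
    instance = [((a, b), (length, length)) for (a, b, length) in fixed_edges]
    for w in vertices:
        if w not in (v_star, v):  # (v_star, v) is already a fixed edge of G'
            instance.append(((v_star, w), (0.0, M)))
    return instance
```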
7 CONCLUSION

In this paper, we designed constant approximation algorithms for the robust Steiner tree (stt) and traveling salesman (tsp) problems. More precisely, our algorithms take as input a range of possible edge lengths in a graph and obtain a single solution for the problem at hand that can be compared to the optimal solution for any realization of edge lengths in the given ranges. While our approximation bounds for tsp are small constants, those for stt are very large constants. A natural question is whether these constants can be made smaller, e.g., of the same scale as classic approximation bounds for stt. While we did not seek to optimize our constants, obtaining truly small constants for stt appears to be beyond our techniques, and is an interesting open question.

More generally, robust algorithms are a key component in the area of optimization under uncertainty, which is of much practical and theoretical significance. Indeed, as mentioned in our survey of related work, several different models of robust algorithms have been considered in the literature. Optimizing over input ranges is one of the most natural models in robust optimization, but it has been restricted in the past to polynomial-time solvable problems because of definitional limitations. We circumvent this by setting regret minimization as our goal, and by creating the (α, β)-approximation framework, which then allows us to consider a large variety of interesting combinatorial optimization problems in this setting. We hope that our work will lead to more research in robust algorithms for other fundamental problems in combinatorial optimization, particularly in algorithmic graph theory.

ACKNOWLEDGMENTS

Arun Ganesh was supported in part by NSF Award CCF-1535989. Bruce M. Maggs was supported in part by NSF Award CCF-1535972. Debmalya Panigrahi was supported in part by NSF grants CCF-1535972, CCF-1955703, an NSF CAREER Award CCF-1750140, and the Indo-US Virtual Networked Joint Center on Algorithms under Uncertainty.

REFERENCES

[1] H. Aissi, C. Bazgan, and D. Vanderpooten. 2008. Complexity of the min-max (regret) versions of min cut problems. Discrete Optimization 5, 1 (2008), 66-73.
[2] Hassene Aissi, Cristina Bazgan, and Daniel Vanderpooten. 2009. Min-max and min-max regret versions of combinatorial optimization problems: A survey. European Journal of Operational Research 197, 2 (2009), 427-438. https://doi.org/10.1016/j.ejor.2008.09.012
[3] Igor Averbakh. 2001. On the complexity of a class of combinatorial optimization problems with uncertainty. Mathematical Programming 90, 2 (Apr 2001), 263-272.
[4] Igor Averbakh. 2005. The Minmax Relative Regret Median Problem on Networks. INFORMS Journal on Computing 17, 4 (2005), 451-461.
[5] I. Averbakh and Oded Berman. 1997. Minimax regret p-center location on a network with demand uncertainty. Location Science 5, 4 (1997), 247-254. https://doi.org/10.1016/S0966-8349(98)00033-3
[6] Igor Averbakh and Oded Berman. 2000. Minmax Regret Median Location on a Network Under Uncertainty. INFORMS Journal on Computing 12, 2 (2000), 104-110. https://doi.org/10.1287/ijoc.12.2.104.11897
[7] Dimitris Bertsimas and Melvyn Sim. 2003. Robust discrete optimization and network flows. Mathematical Programming 98, 1 (Sep 2003), 49-71. https://doi.org/10.1007/s10107-003-0396-4
[8] Jaroslaw Byrka, Fabrizio Grandoni, Thomas Rothvoß, and Laura Sanità. 2010. An improved LP-based approximation for Steiner tree. In Proceedings of the 42nd ACM Symposium on Theory of Computing (STOC 2010), Cambridge, Massachusetts, USA, 5-8 June 2010. 583-592.
[9] André Chassein and Marc Goerigk. 2015. On the recoverable robust traveling salesman problem. Optimization Letters 10 (2015). https://doi.org/10.1007/s11590-015-0949-5
[10] Miroslav Chlebík and Janka Chlebíková. 2002. Approximation Hardness of the Steiner Tree Problem on Graphs. In Algorithm Theory - SWAT 2002, Martti Penttonen and Erik Meineche Schmidt (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 170-179.
[11] Eduardo Conde. 2012. On a constant factor approximation for minmax regret problems using a symmetry point scenario. European Journal of Operational Research 219, 2 (2012), 452-457. https://doi.org/10.1016/j.ejor.2012.01.005
[12] Kedar Dhamdhere, Vineet Goyal, R. Ravi, and Mohit Singh. 2005. How to Pay, Come What May: Approximation Algorithms for Demand-Robust Covering Problems. In 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2005), 23-25 October 2005, Pittsburgh, PA, USA, Proceedings. 367-378.
[13] Martin Groß, Anupam Gupta, Amit Kumar, Jannik Matuschke, Daniel R. Schmidt, Melanie Schmidt, and José Verschae. 2018. A Local-Search Algorithm for Steiner Forest. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018), January 11-14, 2018, Cambridge, MA, USA. 31:1-31:17. https://doi.org/10.4230/LIPIcs.ITCS.2018.31
[14] Masahiro Inuiguchi and Masatoshi Sakawa. 1995. Minimax regret solution to linear programming problems with an interval objective function. European Journal of Operational Research 86, 3 (1995), 526-536. https://doi.org/10.1016/0377-2217(94)00092-Q
[15] Marek Karpinski, Michael Lampis, and Richard Schmied. 2013. New Inapproximability Bounds for TSP. In Algorithms and Computation, Leizhen Cai, Siu-Wing Cheng, and Tak-Wah Lam (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 568-578.
[16] Adam Kasperski and Paweł Zieliński. 2006. An Approximation Algorithm for Interval Data Minmax Regret Combinatorial Optimization Problems. Inf. Process. Lett. 97, 5 (March 2006), 177-180. https://doi.org/10.1016/j.ipl.2005.11.001
[17] Adam Kasperski and Paweł Zieliński. 2007. On the existence of an FPTAS for minmax regret combinatorial optimization problems with interval data. Oper. Res. Lett. 35 (2007), 525-532.
[18] P. Kouvelis and G. Yu. 1996. Robust Discrete Optimization and Its Applications. Springer US.
[19] Panos Kouvelis and Gang Yu. 1997. Robust 1-Median Location Problems: Dynamic Aspects and Uncertainty. Springer US, Boston, MA, 193-240. https://doi.org/10.1007/978-1-4757-2620-6_6
[20] Helmut E. Mausser and Manuel Laguna. 1998. A new mixed integer formulation for the maximum regret problem. International Transactions in Operational Research 5, 5 (1998), 389-403. https://doi.org/10.1016/S0969-6016(98)00023-9
[21] V. Vazirani. 2001. Approximation Algorithms. Springer-Verlag, Berlin.
[22] Jens Vygen. [n.d.]. New approximation algorithms for the TSP.
[23] Laurence A. Wolsey. 1980. Heuristic analysis, linear programming and branch and bound. In Combinatorial Optimization, V. J. Rayward-Smith (Ed.). Springer Berlin Heidelberg, Berlin, Heidelberg, 121-134. https://doi.org/10.1007/BFb0120913
[24] H. Yaman, O. E. Karaşan, and M. Ç. Pinar. 2001. The robust spanning tree problem with interval data. Operations Research Letters 29, 1 (2001), 31-40.
[25] P. Zieliński. 2004. The computational complexity of the relative robust shortest path problem with interval data. European Journal of Operational Research 158, 3 (2004), 570-576.
