Nine-Point Iterated Rectangle Dichotomy for Finding All Local Minima of Unknown Bounded Surface

Applied Computer Systems
ISSN 2255-8691 (online); ISSN 2255-8683 (print)
December 2022, vol. 27, no. 2, pp. 89–100
https://doi.org/10.2478/acss-2022-0010
https://content.sciendo.com

Vadim Romanuke
Polish Naval Academy, Gdynia, Poland

Abstract – A method is suggested to find all local minima and the global minimum of an unknown two-variable function bounded on a given rectangle regardless of the rectangle area. The method has eight inputs: five inputs defined straightforwardly and three adjustable inputs. The straightforward inputs are the endpoints of the initial intervals constituting the rectangle and a formula for evaluating the two-variable function at any point of this rectangle. The three adjustable inputs are a tolerance and the minimal and maximal numbers of subintervals along each dimension. Having broken the initial rectangle into a set of subrectangles, the nine-point iterated rectangle dichotomy "gropes" around every local minimum by successively cutting off 75 % of the subrectangle area or dividing the subrectangle in four. A range of subrectangle sets, defined by the minimal and maximal numbers of subintervals along each dimension, is covered by running the nine-point rectangle dichotomy on every set of subrectangles. Once the set of values of the currently found local minimum points changes by no more than the tolerance, the set of local minimum points and the respective set of minimum values of the surface are returned. The presented approach is applicable to whichever task of finding local extrema. If the primary purpose is to find all local maxima or the global maximum of the two-variable function, the presented approach is applied to the function taken with the negative sign. The presented approach is a significant and important contribution to the field of numerical estimation and approximate analysis. Although the method does not assure obtaining all local minima (or maxima) for any two-variable function, setting appropriate minimal and maximal numbers of subintervals makes missing some minima (or maxima) very unlikely.

Keywords – Finding extrema, local minima, rectangle dichotomy, subrectangles, unknown two-variable function.

Corresponding author's e-mail: romanukevadimv@gmail.com

©2022 Vadim Romanuke. This is an open access article licensed under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).

I. PRACTICAL ISSUES OF FINDING A MINIMUM

When a function (of one, two, or more variables) is given by its equation or a set of equations (in the "curly bracket" form, subdomain-wise), finding the global minimum of the function is an academic task fulfilled either by algebraic (symbolic) or numerical methods [1], [2]. In most practical problems, the function is unknown (i. e., its equation representation is uncertain or unknown), so its derivatives are not available and neither a symbolic approach nor a numerical method is applicable [3], [4]. Another practical issue arises when a symbolic (exact) approach cannot be applied because an equation contains analytically nondifferentiable parts [5], [6]. Then the global minimum can be determined approximately by a numerical method, but this way is impracticable if computing function values either takes an unreasonably long time or is too expensive [2], [3], [7]. Thus, no numerical method is applicable, as there is no object (i. e., tabulated function) to which it might be applied.

It is obvious that finding minima of a one-variable function is easier than that of a two-variable function. The existing methods of finding a minimum without involving the one-variable function equation and tiny-step function evaluation rely on successively narrowing the range of values on the specified interval [8], [9]. The most efficient and robust methods are the golden-section search [10], the Fibonacci search technique [11], and the ternary search algorithm [12], [13].

The golden-section search maintains the function values at four points whose three interval widths are in the golden ratio [10], [14], [15]. These ratios are maintained at each iteration and are maximally efficient. For a strictly unimodal function with an extremum inside the interval, the golden-section search will find that extremum, while for an interval containing multiple extrema (possibly including the interval boundaries), it will converge to one of them. This is an obvious demerit of the method, because one cannot be sure that an interval contains a single minimum, so other minima are simply omitted. The golden-section search can also be used for finding a minimum of a two-variable function by the same iteration technique.

The Fibonacci search technique, derived from the golden-section search, is a similar algorithm to find the extremum (minimum or maximum) of a finite sequence of values that has a single local minimum or local maximum [11], [16]. The algorithm maintains a bracketing of the solution in which the length of the bracketed interval is a Fibonacci number. Obviously, the Fibonacci search technique may also fail to determine the global minimum if there are multiple minima on an interval.

A ternary search aims at finding the minimum of a unimodal function (of one, two, or more variables) [12]. It determines either that the minimum cannot be in the first third of the domain or that it cannot be in the last third of the domain, and then repeats on the remaining two thirds. In the case of a one-variable function, the ternary search is slightly faster than the golden-section search, but it may also omit the global minimum.
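As a concrete illustration of interval-narrowing search, the golden-section step described above can be sketched in a few lines (a minimal one-variable sketch; the function name, argument order, and tolerance handling are illustrative, not taken from the paper):

```python
import math

def golden_section_min(f, lo, hi, tol=1e-8):
    """Minimize a (presumably unimodal) one-variable function on [lo, hi]
    by golden-section search; returns the abscissa of the found minimum.
    On a multimodal interval it converges to one extremum only."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
    x1 = hi - invphi * (hi - lo)
    x2 = lo + invphi * (hi - lo)
    f1, f2 = f(x1), f(x2)
    while hi - lo > tol:
        if f1 < f2:                          # the minimum lies in [lo, x2]
            hi, x2, f2 = x2, x1, f1
            x1 = hi - invphi * (hi - lo)
            f1 = f(x1)
        else:                                # the minimum lies in [x1, hi]
            lo, x1, f1 = x1, x2, f2
            x2 = lo + invphi * (hi - lo)
            f2 = f(x2)
    return (lo + hi) / 2
```

Each iteration reuses one of the two interior evaluations, which is exactly why the ratios are called maximally efficient; the demerit noted above is visible here as well, since only one bracket survives each step.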
The global minimum can be found by the genetic algorithm [17], [18]. The genetic algorithm, based on imitating biological evolution with selection and crossover of solution candidates, is far more reliable in determining the minimum. Although it is far slower than the golden-section search and ternary search algorithms, the genetic algorithm is more reliable, as omitting the global minimum is less likely (when the function is not unimodal [19], [20]). See an example in Fig. 1, where the global minimum is to be found for a two-variable function (surface)

f(x, y) = 1.2 sin(0.28x) cos(0.238y − π/3) + sin(0.8x − π/4) cos(0.58y − π/6) + 2 sin(0.33x − π/5) cos(0.22y − π/4) + e^(0.0004(x + 7)²) + e^(0.0004(x − 8)²) + e^(0.0006(y + 6)²) + e^(0.0006(y − 11)²)   (1)

defined on rectangle

[−7; 8] × [−6; 11]   (2)

by x ∈ [−7; 8], y ∈ [−6; 11].

Fig. 1. An example of when both the golden-section search and ternary search find a local minimum and omit the global minimum, whereas the global minimum is found by the genetic algorithm (the surface value axis is shown in reverse direction for easier observation of minima).

The genetic algorithm used for this example is run by a series of its input parameters (apart from the equation of the function and the rectangle) that need to be accurately specified. However, often even the genetic algorithm with accurately specified parameters fails to find the global minimum. A counterexample is shown in Fig. 2, where the global minimum is to be found for a surface (a slight modification of that in Fig. 1)

f(x, y) = 1.2 sin(0.28x) cos(0.238y − π/3) + sin(0.8x − π/4) cos(0.58y − π/6) + 2 sin(0.33x − π/5) cos(0.22y − π/4) + e^(0.0004(x + 9)²) + e^(0.0004(x − 9)²) + e^(0.0006(y + 9)²) + e^(0.0006(y − 11)²)   (3)

defined on rectangle

[−9; 9] × [−9; 11]   (4)

by x ∈ [−9; 9], y ∈ [−9; 11]. Besides, the genetic algorithm is far slower than both the golden-section search and ternary search algorithms, as it must operate on rather a great deal of solution candidates. In general, genetic algorithms converge slower as the problem size grows (the rectangle area, in this particular case). Therefore, if the function evaluation is expensive (in the sense of either computational time or a factual cost of evaluating a value of the function), the genetic algorithm is practically inappropriate (unacceptable).

Fig. 2. An example of when none of the algorithms "sees" the global minimum, but each of the algorithms finds the same local minimum, omitting another local minimum whose surface value is greater (the surface value axis is shown in reverse direction for easier observation of minima).

Apart from the need to determine the global minimum, often all local minima of an unknown surface are required to be found. However, the mentioned methods allow determining (or locating) only one minimum on a rectangle, and this minimum may be not global (Fig. 2). To find all local minima on a rectangle, this rectangle should be broken into a set of subrectangles, on each of which no more than a single local minimum is supposed to be. Obviously, this supposition is not always true, so some local minima, including the global one, may be lost. An example is the problem of fine-tuning hyperparameters and training parameters of neural networks [21], [22]. An objective function is unknown, and its evaluation is either time-consuming or resource-consuming, or both [23]. Thus, neither exact methods nor numerical methods can be efficiently applied to find (quasi-)optimal parameters. In addition, knowing all local minima can be useful to form a set of (quasi-)optimal parameters, because often the objective function value at the global minimum is not significantly lower than the objective function value at a local minimum, and the local minimum point may have more appropriate components to use in practice.

II. GOAL AND OBJECTIVES TO BE ACCOMPLISHED

As the existing approaches are incapable of determining all local minima of an unknown two-variable function (surface) within any bounded rectangle, the goal is to develop a method by which they could be efficiently found regardless of the rectangle area. For this reason, the rectangle should be broken into narrower subrectangles. To achieve the goal, the following objectives are to be accomplished:
1. To impose specific conditions on the surface and its extrema on the bounded rectangle.
2. To suggest a method (algorithm #1) for determining a local minimum on the rectangle or returning a specific answer implying that the rectangle, apart from the local minimum, contains other extrema.
3. To suggest a method (algorithm #2) of breaking the initial rectangle into a set of subrectangles, for each of which a local minimum is found or the specific answer is returned by algorithm #1. Thus, algorithm #1 is to be incorporated into algorithm #2.
4. To suggest a method (algorithm #3) of adjusting algorithm #2 so that all the local minima would be determined with an acceptable accuracy (tolerance) and no specific answer would be returned by algorithm #1. Algorithm #3 should incorporate algorithm #2 to be a novel approach of finding local minima of a surface.
5. To exemplify the suggested approach.
6. To discuss applicability and significance of the approach.
7. To make an appropriate conclusion on it.
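For illustration, a toy surface of the kind in (1) can be supplied to a minimizer as a plain evaluation routine. The coefficients below follow the reading of (1) reconstructed here (three sinusoidal terms plus four exponential "walls" rising toward the rectangle borders) and are otherwise arbitrary:

```python
import math

def f1(x, y):
    """A toy surface in the spirit of (1): sinusoidal ripples producing
    many local minima, plus exponential terms that grow toward the
    borders of rectangle (2), i.e. [-7; 8] x [-6; 11]."""
    return (1.2 * math.sin(0.28 * x) * math.cos(0.238 * y - math.pi / 3)
            + math.sin(0.8 * x - math.pi / 4) * math.cos(0.58 * y - math.pi / 6)
            + 2 * math.sin(0.33 * x - math.pi / 5) * math.cos(0.22 * y - math.pi / 4)
            + math.exp(0.0004 * (x + 7) ** 2) + math.exp(0.0004 * (x - 8) ** 2)
            + math.exp(0.0006 * (y + 6) ** 2) + math.exp(0.0006 * (y - 11) ** 2))
```

A routine of this shape is the "formula for evaluating the two-variable function at any point of the rectangle" that all three algorithms below take as an input.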
III. SURFACE AND ITS EXTREMA ON A BOUNDED RECTANGLE

Based on practical experience, it is naturally assumed that a surface f(x, y) defined on rectangle

[a; b] × [c; d] by b > a and d > c   (5)

is sufficiently smooth (i. e., it does not contain piecewise constant parts) and bounded. Whether surface f(x, y) is continuous on rectangle (5) or not, this surface is supposed to have no points of jump discontinuity. In addition, it is supposed that the surface has an extremum on open rectangle

(a; b) × (c; d).   (6)

However, this supposition may be false, and so the surface can have no extrema on rectangle (5). The supposition that the surface is a strictly unimodal two-variable function with an extremum inside rectangle (5) is not made.

IV. NINE-POINT RECTANGLE DICHOTOMY

There are six inputs to algorithm #1: the interval endpoints a, b, c, d, a formula for evaluating surface f(x, y) at any point [x y] ∈ [a; b] × [c; d], and tolerance ε (a sufficiently small positive number). Why two-by-two points are insufficient to be probed in searching for a minimum is illustrated in Fig. 3. Therefore, the case with three-by-three points must be studied. Let these points x_1, x_2, x_3, y_1, y_2, y_3 be selected uniformly within rectangle (5) as a 3×3 lattice (Fig. 4):

x_1 = a + (b − a)/4 = (3a + b)/4,  x_2 = a + (b − a)/2 = (a + b)/2,  x_3 = a + 3(b − a)/4 = (a + 3b)/4   (7)

and

y_1 = c + (d − c)/4 = (3c + d)/4,  y_2 = c + (d − c)/2 = (c + d)/2,  y_3 = c + 3(d − c)/4 = (c + 3d)/4.   (8)

Alternatively, these points are calculated starting from the middle point:

x_2 = (a + b)/2,  x_1 = (a + x_2)/2,  x_3 = (x_2 + b)/2   (9)

and

y_2 = (c + d)/2,  y_1 = (c + y_2)/2,  y_3 = (y_2 + d)/2.   (10)

Fig. 3. Examples of when probing only two-by-two internal points leads to cutting off the subrectangle containing the least value of a surface on a given rectangle (the surface value is proportional to the circle size, and the least value is marked by a square; the area which is cut off is darkened).

Fig. 4. Probing three-by-three internal points as a 3×3 uniform lattice. The endpoints of the initial rectangle and the nodes of this lattice altogether determine 16 equal subrectangles.

At the first (initial) step of algorithm #1, the surface is evaluated at points

[x_i y_j] for i = 1, 2, 3 and j = 1, 2, 3   (11)

by (9), (10), and the minimum of the nine values

{f(x_i, y_j)} for i = 1, 2, 3 and j = 1, 2, 3   (12)

is found. Denote this minimum by z*. Point

[x* y*] ∈ {[x_i y_j]} for i = 1, 2, 3 and j = 1, 2, 3   (13)

at which this minimum is found is stored. Minimum value z* = f(x*, y*) is stored as well. Here, Δx = x_2 − x_1 and Δy = y_2 − y_1. While Δx > ε and Δy > ε, the following routine is executed.

If

z* = f(x_1, y_1)   (14)

then

b = x_2, d = y_2   (15)

and (9), (10) are re-specified; if

z* = f(x_1, y_2)   (16)

then

b = x_2, c = y_1, d = y_3   (17)

and (9), (10) are re-specified; if

z* = f(x_1, y_3)   (18)

then

b = x_2, c = y_2   (19)

and (9), (10) are re-specified; if

z* = f(x_2, y_1)   (20)

then

a = x_1, b = x_3, d = y_2   (21)

and (9), (10) are re-specified; if

z* = f(x_2, y_2)   (22)

then

a = x_1, b = x_3, c = y_1, d = y_3   (23)

and (9), (10) are re-specified; if

z* = f(x_2, y_3)   (24)

then

a = x_1, b = x_3, c = y_2   (25)

and (9), (10) are re-specified; if

z* = f(x_3, y_1)   (26)

then

a = x_2, d = y_2   (27)

and (9), (10) are re-specified; if

z* = f(x_3, y_2)   (28)

then

a = x_2, c = y_1, d = y_3   (29)

and (9), (10) are re-specified; if

z* = f(x_3, y_3)   (30)

then

a = x_2, c = y_2   (31)

and (9), (10) are re-specified. If

max {f(x_i, y_j)} for i = 1, 2, 3 and j = 1, 2, 3 equals f(x_2, y_2),   (32)

then algorithm #1 stops, returning the specific answer implying that the area, apart from the local minimum, contains other extrema. This answer is returned in the form of the empty set of surface minimum (M* = ∅) and four subrectangles

[a; x_2] × [c; y_2],  [a; x_2] × [y_2; d],  [x_2; b] × [c; y_2],  [x_2; b] × [y_2; d].   (33)

Otherwise, if (32) is false, the minimum of values (12) is found, values

{x*, y*, z*}   (34)

are stored, and new Δx = x_2 − x_1 and Δy = y_2 − y_1 are calculated.

All the 10 cases of the routine conditions and their outcomes are illustrated in Fig. 5, where the four subrectangles (33) following condition (32) are seen in the last subplot. Upon the other nine conditions, unlike the golden-section search, by which the area-to-search is decreased by

(2/(√5 − 1))² = 4/(6 − 2√5) ≈ 2.618 times,

the area-to-search by algorithm #1 becomes decreased by 4 times. This means that algorithm #1 may converge faster than the golden-section search. Nevertheless, it cannot guarantee that the global minimum will not be omitted.

In fact, algorithm #1 is a nine-point rectangle dichotomy running while Δx > ε or Δy > ε unless condition (32) turns true. The returned output is either M* = {x*, y*, z*} or M* = ∅ and four subrectangles (33).
Fig. 5. The 10 possible cases and their outcomes, issued from one of conditions (14), (16), (18), (20), (22), (24), (26), (28), (30), (32) (the area which is cut off is darkened; the least value is marked by a square, and the maximum by (32) is marked by a circle).

Algorithm #1 itself is insufficient to find all minima of a surface. Nevertheless, owing to the third points along the axes of x and y, it can sometimes outperform any two-by-two-point search (at least by not returning a "false" local minimum and simultaneously losing a "real" local minimum, as in the examples in Figs. 1–3). Subrectangles (33) that follow condition (32) are to be further studied as to whether each of them contains minima.

V. RUNNING THE NINE-POINT RECTANGLE DICHOTOMY ON A SET OF SUBRECTANGLES

As the initial rectangle (5) may contain multiple local minima, it is reasonable to break the initial rectangle into a set of equal subrectangles, for each of which a local minimum would be found or the specific answer would be returned by algorithm #1. Each of the intervals [a; b] and [c; d] is to be broken into a set of equal-length subintervals. Let us denote the number of such subintervals along each axis by N. Then there are N² equal subrectangles. These subrectangles are

{[a_k; b_k] × [c_l; d_l]} for k = 1, …, N and l = 1, …, N   (35)

by

a_1 = a,  b_1 = a + (b − a)/N   (36)

and

a_k = b_(k−1),  b_k = a_k + (b − a)/N for k = 2, …, N,   (37)

and

c_1 = c,  d_1 = c + (d − c)/N   (38)

and

c_l = d_(l−1),  d_l = c_l + (d − c)/N for l = 2, …, N.   (39)

There are seven inputs to algorithm #2 (which incorporates algorithm #1): the interval endpoints a, b, c, d, a formula for evaluating surface f(x, y) at any point [x y] ∈ [a; b] × [c; d], tolerance ε, and the number of subintervals N. At the first step of algorithm #2, algorithm #1 is applied to subrectangle

[a_k; b_k] × [c_l; d_l] for k = 1, …, N and l = 1, …, N.   (40)

If [x* y*] is found for subrectangle (40), i. e. M* ≠ ∅ for this subrectangle, then the endpoint-eliminating condition is checked. If

|x* − a_k| < ε or |x* − b_k| < ε or |y* − c_l| < ε or |y* − d_l| < ε,   (41)

then [x* y*] is deleted, as it is too close to one of the four sides of the subrectangle (and, thus, the surface is supposed to have no "internal" minima on this subrectangle). Otherwise, [x* y*] is stored along with f(x*, y*) as a local minimum on subrectangle (40). If M* = ∅ for subrectangle (40), then algorithm #1 is applied to the four subrectangles

[a_k; (a_k + b_k)/2] × [c_l; (c_l + d_l)/2],   (42)

[a_k; (a_k + b_k)/2] × [(c_l + d_l)/2; d_l],   (43)

[(a_k + b_k)/2; b_k] × [c_l; (c_l + d_l)/2],   (44)

[(a_k + b_k)/2; b_k] × [(c_l + d_l)/2; d_l].   (45)

For each of subrectangles (42)–(45), the endpoint-eliminating condition is checked. A minimum

[x*^(leftbot) y*^(leftbot)],   (46)

if it is found on subrectangle (42), is deleted if

|x*^(leftbot) − a_k| < ε or |x*^(leftbot) − (a_k + b_k)/2| < ε or |y*^(leftbot) − c_l| < ε or |y*^(leftbot) − (c_l + d_l)/2| < ε;   (47)

otherwise, (46) is stored along with f(x*^(leftbot), y*^(leftbot)) as a local minimum on subrectangle (42). A minimum

[x*^(lefttop) y*^(lefttop)],   (48)

if it is found on subrectangle (43), is deleted if

|x*^(lefttop) − a_k| < ε or |x*^(lefttop) − (a_k + b_k)/2| < ε or |y*^(lefttop) − (c_l + d_l)/2| < ε or |y*^(lefttop) − d_l| < ε;   (49)

otherwise, (48) is stored along with f(x*^(lefttop), y*^(lefttop)) as a local minimum on subrectangle (43). A minimum

[x*^(rightbot) y*^(rightbot)],   (50)

if it is found on subrectangle (44), is deleted if

|x*^(rightbot) − (a_k + b_k)/2| < ε or |x*^(rightbot) − b_k| < ε or |y*^(rightbot) − c_l| < ε or |y*^(rightbot) − (c_l + d_l)/2| < ε;   (51)

otherwise, (50) is stored along with f(x*^(rightbot), y*^(rightbot)) as a local minimum on subrectangle (44). A minimum

[x*^(righttop) y*^(righttop)],   (52)

if it is found on subrectangle (45), is deleted if

|x*^(righttop) − (a_k + b_k)/2| < ε or |x*^(righttop) − b_k| < ε or |y*^(righttop) − (c_l + d_l)/2| < ε or |y*^(righttop) − d_l| < ε;   (53)

otherwise, (52) is stored along with f(x*^(righttop), y*^(righttop)) as a local minimum on subrectangle (45).

At the second step of algorithm #2, all the local minima found on the N² subrectangles (35) are aggregated into a set S* whose elements are sorted in ascending order. Let Z* = f(S*) be the set of surface values at the local minima in S*. Then all local minima of functions

f(a, y), f(b, y), f(x, c), f(x, d)

are found on intervals

[c; d], [c; d], [a; b], [a; b],

respectively. Let us denote a local minimum of function f(a, y) by y*^(a). If

f(a, y*^(a)) < max f(S*),   (54)

then point

[x y] = [a y*^(a)]   (55)

can be counted a local boundary minimum and

S*^(obs) = S*,  S* = {[a y*^(a)]} ∪ S*^(obs),   (56)

Z*^(obs) = Z*,  Z* = {f(a, y*^(a))} ∪ Z*^(obs).   (57)

Let us denote a local minimum of function f(b, y) by y*^(b). If

f(b, y*^(b)) < max f(S*),   (58)

then point

[x y] = [b y*^(b)]   (59)

can be counted a local boundary minimum and

S*^(obs) = S*,  S* = {[b y*^(b)]} ∪ S*^(obs),   (60)

Z*^(obs) = Z*,  Z* = {f(b, y*^(b))} ∪ Z*^(obs).   (61)

Let us denote a local minimum of function f(x, c) by x*^(c). If

f(x*^(c), c) < max f(S*),   (62)

then point

[x y] = [x*^(c) c]   (63)

can be counted a local boundary minimum and

S*^(obs) = S*,  S* = {[x*^(c) c]} ∪ S*^(obs),   (64)

Z*^(obs) = Z*,  Z* = {f(x*^(c), c)} ∪ Z*^(obs).   (65)

Let us denote a local minimum of function f(x, d) by x*^(d). If

f(x*^(d), d) < max f(S*),   (66)

then point

[x y] = [x*^(d) d]   (67)

can be counted a local boundary minimum and

S*^(obs) = S*,  S* = {[x*^(d) d]} ∪ S*^(obs),   (68)

Z*^(obs) = Z*,  Z* = {f(x*^(d), d)} ∪ Z*^(obs).   (69)

At the end of the routine, algorithm #2 returns sets S* and Z*.

VI. COVERING A RANGE OF SUBRECTANGLE SETS

Algorithm #2, if number N is sufficiently great, returns all the local minima of a surface, including its local boundary minima, if any, satisfying inequalities (54), (58), (62), (66). How to guess the sufficiently great number of subintervals? Always inputting very great numbers of subintervals into algorithm #2 will significantly slow down finding minima. Inputting a fewer number of subintervals may lead to losing some minima. Thus, it is reasonable to try a set of these numbers to see whether new minima appear as number N is increased.

There are eight inputs to algorithm #3 (which incorporates algorithm #2): the interval endpoints a, b, c, d, a formula for evaluating surface f(x, y) at any point [x y] ∈ [a; b] × [c; d], tolerance ε, the minimal number of subintervals N_min, and the maximal number of subintervals N_max.
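The tiling step of algorithm #2 that algorithm #3 repeatedly invokes can be sketched as follows (algorithm #1 is restated minimally; the quartering of "specific answer" subrectangles (42)–(45) and the boundary-minima updates (54)–(69) are omitted for brevity, and all names are illustrative):

```python
def dichotomy(f, a, b, c, d, eps):
    """Minimal restatement of algorithm #1 (see Section IV)."""
    while True:
        xs = [a + (b - a) / 4, (a + b) / 2, a + 3 * (b - a) / 4]
        ys = [c + (d - c) / 4, (c + d) / 2, c + 3 * (d - c) / 4]
        vals = {(i, j): f(xs[i], ys[j]) for i in range(3) for j in range(3)}
        if max(vals, key=vals.get) == (1, 1):        # condition (32)
            return None                              # specific answer
        i, j = min(vals, key=vals.get)
        a, b = [(a, xs[1]), (xs[0], xs[2]), (xs[1], b)][i]
        c, d = [(c, ys[1]), (ys[0], ys[2]), (ys[1], d)][j]
        if (b - a) / 4 <= eps and (d - c) / 4 <= eps:
            return xs[i], ys[j], vals[(i, j)]

def all_minima(f, a, b, c, d, eps, n):
    """Sketch of algorithm #2: break [a; b] x [c; d] into an n x n set of
    equal subrectangles per (35)-(39), run algorithm #1 on each, and keep
    a minimum only if it is not within eps of a subrectangle side (41)."""
    minima = []
    for k in range(n):
        for l in range(n):
            ak, bk = a + k * (b - a) / n, a + (k + 1) * (b - a) / n
            cl, dl = c + l * (d - c) / n, c + (l + 1) * (d - c) / n
            m = dichotomy(f, ak, bk, cl, dl, eps)
            if m is None:
                continue          # quartering (42)-(45) omitted in this sketch
            x, y, z = m
            if min(x - ak, bk - x, y - cl, dl - y) >= eps:  # condition (41)
                minima.append((x, y, z))
    return sorted(minima, key=lambda t: t[2])        # S* sorted; z-values give Z*
```

For instance, (x² − 1)² + (y² − 1)² on [−2; 2] × [−2; 2] with n = 2 yields its four internal minima, one per subrectangle.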
If * * * and Z = ∅ , the set of local minima of function f(a, y) is found () j () a () aa () WS = (77) ** as Y , and updates (56), (57) are fulfilled ∀∈ yY . Then * ** () b the set of local minima of function f (by , ) is found as Y , by () bb () and updates (60), (61) are fulfilled ∀ y ∈ Y . The set of local ** () hh () () c , (78) W =  tu { } minima of function f(x, c) is found as X , and updates (56), * **  * h=1 () cc () (57) are fulfilled ∀∈ xX . The set of local minima of ** then it is checked whether those H values in sets (77) and (78) () d function f(x, d) is found as X , and updates (56), (57) are * are sufficiently close. Thus, if () dd () fulfilled ∀∈ x X . Finally, algorithm #3 stops returning ** ( h ) (, j h) max tx − <ε (79) ** sets S and Z . hH =1, * * Otherwise, if a nonempty set S is found at some N = N and (i. e., M ≠∅ at N = N ), a counter of the number of ( h ) (, j h) subintervals is set at 1: j = 1. Besides, the found sets S and Z * * max uy − <ε (80) ** hH =1, are stored indicating the counter: (1) (1) then algorithm #3 stops returning SS = , ZZ = . (70) ** ** (1 j− ) (1 jh − , ) (1 jh − , ) SS  x y , (81) { } ** * *  Then, at the second step of algorithm #3, the number of h=1 subintervals is increased by 1 and counter j is increased by 1, H H (1 j− ) (1 jh − , ) (1 jh −− , ) (1 jh , ) Z Z z fx , y . (82) whereupon algorithm #2 is applied by this new number of { } { ( )} ** * * * hh 11 subintervals N and In fact, algorithm #3 covering a range of algorithm #2 runs is () j () j S = S , Z = Z . (71) * * ** a nine-point iterated rectangle dichotomy. It is worth noting that algorithm #3 does not return the specific answer. The specific If answer is an internal object of algorithm #1 serving to divide a subinterval further. ( j ) ( j−1) (72) SS = If the task is to find the global minimum, then it is fulfilled ** as a trivial appendix to the nine-point iterated rectangle dichotomy. 
As sets (81), (82) are obtained, the global minimum by function value (1 j− ) (1 jh − , ) (1 jh − , ) S =  xy (73) { } * **  h=1 ( jh −1, ) zz = min (83) ** * hH =1, and is calculated and the global minimum point ( j ) (, j h) (, j h) S =  xy , (74) { } * **  h=1 ( jh −1, ) ( jh −1, ) [ xy ]∈  x y (84) { } ** **  * * h=1 then it is checked whether those H values in sets (73) and (74) are sufficiently close (or, being accurate to ε, are practically the at which same). Thus, if z = fx , y (85) ( ) ** ** ** ( j ,) h ( j−1,) h max xx − <ε (75) ** hH =1, is extracted. If the global minimum is to be found purely on open rectangle (6), then the global minimum function and value ( j−1) ( a ) (bc ) ( ) ( d ) z = min Z \ f a, y , f b, y , fx , c , fx , d { { ( ) ( ) ( ) ( )}} ** * * * * * () aa () () bb () () cc () () dd () ∀∈ yY and ∀ y ∈ Y and ∀∈ xX and ∀∈ x X (86) ** ** ** ** == = = = = = Applied Computer Systems _________________________________________________________________________________________________2022/27 is calculated and global minimum point (84) at which (85) holds genetic algorithm, specific surface instances are taken that have is extracted. multiple extrema [19]. The respective experiments confirm that the nine-point iterated rectangle dichotomy outperforms those VII. EXAMPLES and other approaches. An example of applying the nine-point iterated rectangle dichotomy is shown in Fig. 6, where a toy To compare performance of the suggested method to the surface performance of the golden-section search, ternary search, a local minimum a local minimum the global minimum found f xy , ( ) found by both found by the by the nine-point iterated the ternary search golden-section rectangle dichotomy but and genetic search “unseen” by the other algorithms algorithms Fig. 6. 
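Before turning to the examples, the outer loop of algorithm #3 (conditions (70)–(82)) with the "trivial appendix" (83)–(85) can be sketched generically over any routine playing the role of algorithm #2. Here find_minima(n) stands for such a routine returning a sorted list of (x, y, z) triples; pairing the minima of consecutive runs by their sorted order is a simplifying assumption of this sketch:

```python
def iterate_subrectangles(find_minima, n_min, n_max, eps):
    """Sketch of algorithm #3's stopping rule: run find_minima(n) for
    n = n_min, n_min + 1, ..., and stop once two consecutive runs agree
    pairwise within eps (conditions (72)-(76)) AND a confirmation run
    with 2n subintervals agrees as well (conditions (77)-(80))."""
    prev = None
    for n in range(n_min, n_max + 1):
        cur = find_minima(n)
        if prev is not None and len(cur) == len(prev):       # condition (72)
            close = all(abs(p[0] - q[0]) < eps and abs(p[1] - q[1]) < eps
                        for p, q in zip(prev, cur))          # (75), (76)
            if close:
                confirm = find_minima(2 * n)                 # the set W*
                if len(confirm) == len(cur) and all(         # (77)-(80)
                        abs(p[0] - q[0]) < eps and abs(p[1] - q[1]) < eps
                        for p, q in zip(cur, confirm)):
                    return prev                              # sets (81), (82)
        prev = cur
    return prev

def global_minimum(minima):
    """The trivial appendix (83)-(85): pick the least surface value."""
    return min(minima, key=lambda t: t[2])
```

The sketch makes the cost structure plain: each extra value of n re-tiles the whole rectangle, and the confirmation run quadruples the number of subrectangles, which is why guessing N_min and N_max well matters.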
The 16 local minima (two of which are unseen from this view angle), including both the local boundary minima and the global minimum of surface (87) on rectangle (88) (the surface value axis is shown in reverse direction for easier observation of minima). π ππ      f xy , 1.2 sin 0.28x cos 0.238 y−+ sin 0.8x− cos 0.58 y−+ ( ) ( )      3 4 6      22 2 2 π π    0.0004⋅+ xx 11 0.0004⋅−17 0.0006⋅ y+12 0.0006⋅ y−24 ( ) ( ) ( ) ( ) +2 sin 0.33x− cos 0.22 ye −+ + e + e + e (87)     54   on rectangle minimized on rectangle (88) [−11; 17]×[−12; 24] −2; 8 × −1; 4 (90) [ ] [ ] is used to model the minimization problem. The 16 local where the surface (unbeknown to a researcher in reality) has a minima (including the global minimum) are found by N = 15, sort of modulation. This one is a more difficult example for i. e. by breaking the initial rectangle into 225 subrectangles, both the golden-section search and ternary algorithms. where initially N = 2 and N = 24. Except for the nine-point min max The 15 local minima (including the global minimum) are found dichotomy approach, none of the other algorithms finds the here by N = 37 (where N = 2 and N = 44). Therefore, the min max global minimum. Another example is presented in Fig. 7 for a algorithm takes here 1369 subrectangles to determine those 15 toy surface “stable” minima. Compared to the example in Fig. 6 with surface (87), the example in Fig. 7 with surface (90) f ( xy , )= can be thought as a more computationally expensive (despite π ππ roughly the same surface complexity and the number of local       sin sin 1.1x− sin 1.8 y−− cos 0.3xy (89) ( )       minima). 
  8  38   = Applied Computer Systems _________________________________________________________________________________________________2022/27 a local minimum found by the global minimum found both the ternary search by the nine-point iterated and golden-section search rectangle dichotomy and algorithms genetic algorithm f ( xy , ) Fig. 7. The 15 local minima (one of which is unseen from this view angle), including both the local boundary minima and the global minimum of surface (89) on rectangle (90) (the surface value axis is shown in reverse direction for easier observation of minima). VIII. DISCUSSION the respective surfaces are defined). Then set S lacks some The presented approach is a significant and important minima. If a researcher deals with presumably highly fluctuating surfaces (data), the part in algorithm #3 can be contribution to the field of numerical estimation and approximate analysis. It is clear that it is applicable to slightly modified: if inequalities (75) and (76) hold then algorithm #2 can be applied by the number of subintervals mN , whichever task of finding local extrema. If the purpose is to find j all local maxima or the global maximum of the surface, the where m > 2. Reversely, studying rare-fluctuating surfaces (data) may be more efficient if, for instance, m = 1.5 or about presented approach is applied to surface −f(x, y). In real-world that. contemporary practice, it must serve as a computationally It is worth noting that if the ratio of lengths of intervals [a; b] efficient tool for optimization tasks where exact and full-scale and [c; d] is too far from 1, the initial rectangle (5) is too numerical methods are inapplicable. In addition, the fact of that stretched. Then, obviously, any subrectangle will be of the same often the objective function or surface is defined on a (known) (bad) aspect ratio. A problem may arise such that some local discrete set (e. g., in searching for an optimal size). 
Using a minima along the dimension of the longer interval (or, in simple numerical method always implies that the function (surface) is words, along the “stretched” dimension) may remain “unseen” considered a finite set of values corresponding to a finite set of by algorithm #3. To avoid this situation, it is recommended to points to which those values are assigned. The examples are the keep the rectangle aspect ratio close to 1:1. However, if the problem of fine-tuning hyperparameters and training surface fluctuates rarely along a dimension, this dimension can parameters of neural networks, adjusting parameters of be left “stretched” (i. e., a much shorter interval in the other complex systems (like radars, radio telescopes, massive dimension can be considered). engines, big-scaled constructions, etc.), and optimal If the two-variable function is unimodal, the nine-point parametrization at all. Without knowing additional information about the surface, iterated rectangle dichotomy is slower than the golden-section unfortunately, there cannot be assurance of that every local search. The slowdown is relatively significant if to measure the minimum will be included into the output set S . Doubling the computational time for a long series of the minimization problems. Nevertheless, the function unimodality is a rare number of subintervals along each dimension (axis) to see occasion in real-world practice. Even if not all local minima (or whether the same minima remain, when (77) and (79), (80) are maxima) are to be found, the golden-section search may fail to expected to be true, may fail in a case if the surface (strictly “hit” the global minimum (or maximum), whereas the nine- speaking, the data) is overly fluctuating (something similar to Figs. 6 and 7, if to consider much larger rectangles, on which point iterated rectangle dichotomy completes the task. 
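The 1:1 aspect-ratio recommendation above can be automated. The helper below is our own illustration of one possible variation, not part of algorithms #1–#3 (which use a single common N for both axes): it picks separate per-axis subinterval counts proportional to the side lengths, so that every subrectangle is close to square.

```python
def subinterval_counts(a, b, c, d, n):
    """Choose per-axis subinterval counts so subrectangles are near-square.

    n is the desired count along the LONGER side; the shorter side gets a
    proportionally smaller count (at least 1). This is a variation on the
    paper's scheme, which uses one common N along both axes.
    """
    wx, wy = b - a, d - c
    if wx >= wy:
        nx = n
        ny = max(1, round(n * wy / wx))
    else:
        ny = n
        nx = max(1, round(n * wx / wy))
    return nx, ny

# A stretched rectangle [0; 10] x [0; 2] with n = 10 along the longer side
# yields 10 x 2 subrectangles, each a 1 x 1 square:
print(subinterval_counts(0, 10, 0, 2, 10))  # -> (10, 2)
```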
IX. CONCLUSION

The suggested method of nine-point iterated rectangle dichotomy is an approach to finding all local extrema of an unknown two-variable function bounded on a given rectangle regardless of the rectangle area. Five of the eight inputs of the method routine are straightforwardly defined, whereas the tolerance with the minimal and maximal numbers of subintervals along each dimension are adjustable. These subinterval numbers are the primary adjustable inputs. Although the method does not assure obtaining all local minima (or maxima) for any surface, setting appropriate minimal and maximal numbers of subintervals makes missing some minima (or maxima) very unlikely. The tolerance is the secondary adjustable input. If the minimal and maximal numbers of subintervals are selected too small, no tolerance, however small, can help in finding every local extremum.

The endpoints of the initial intervals constituting the rectangle and a formula for evaluating the surface at any point of this rectangle are inputted along with the three adjustable inputs. Having broken the initial rectangle into a set of subrectangles, the nine-point iterated rectangle dichotomy "gropes" around every local minimum by successively cutting off 75 % of the subrectangle area or dividing the subrectangle in four. A range of subrectangle sets defined by the minimal and maximal numbers of subintervals along each dimension is covered by running the nine-point rectangle dichotomy on every set of subrectangles. Once the set of values of currently found local minima points changes by no more than the tolerance, the set of local minimum points and the respective set of minimum values of the surface are returned.

REFERENCES

[1] M. L. Lial, R. N. Greenwell, and N. P. Ritchey, Calculus with Applications (11th edition). Pearson, 2016.
[2] L. D. Hoffmann, G. L. Bradley, and K. H. Rosen, Applied Calculus for Business, Economics, and the Social and Life Sciences. McGraw-Hill Higher Education, 2005.
[3] S. A. Vavasis, "Complexity issues in global optimization: A survey," in Handbook of Global Optimization. Nonconvex Optimization and Its Applications, vol. 2, R. Horst and P. M. Pardalos, Eds. Springer, Boston, MA, 1995, pp. 27–41. https://doi.org/10.1007/978-1-4615-2025-2_2
[4] J. Stewart, Calculus: Early Transcendentals (6th edition). Brooks/Cole.
[5] E. Hewitt and K. R. Stromberg, Real and Abstract Analysis. Springer, 1965. https://doi.org/10.1007/978-3-642-88044-5
[6] K. R. Stromberg, Introduction to Classical Real Analysis. Wadsworth.
[7] R. Fletcher, Practical Methods of Optimization (2nd edition). J. Wiley and Sons, Chichester, 1987.
[8] J. Kiefer, "Sequential minimax search for a maximum," Proceedings of the American Mathematical Society, vol. 4, no. 3, pp. 502–506, 1953. https://doi.org/10.2307/2032161
[9] M. Avriel and D. J. Wilde, "Optimality proof for the symmetric Fibonacci search technique," Fibonacci Quarterly, no. 4, pp. 265–269, 1966.
[10] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, "Minimization or maximization of functions," in Numerical Recipes: The Art of Scientific Computing (3rd edition), Cambridge University Press, New York, 2007, pp. 487–562.
[11] K. J. Overholt, "Efficiency of the Fibonacci search method," BIT Numerical Mathematics, vol. 13, no. 1, pp. 92–96, Mar. 1973. https://doi.org/10.1007/BF01933527
[12] S. Edelkamp and S. Schrödl, "Chapter 7 – Symbolic search," in Heuristic Search, S. Edelkamp and S. Schrödl, Eds. Morgan Kaufmann, 2012, pp. 283–318. https://doi.org/10.1016/B978-0-12-372512-7.00007-9
[13] Ş. E. Amrahov, A. S. Mohammed, and F. V. Çelebi, "New and improved search algorithms and precise analysis of their average-case complexity," Future Generation Computer Systems, vol. 95, pp. 743–753, Jun. 2019. https://doi.org/10.1016/j.future.2019.01.043
[14] G. S. Rani, S. Jayan, and K. V. Nagaraja, "An extension of golden section algorithm for n-variable functions with MATLAB code," in IOP Conf. Series: Materials Science and Engineering, IOP Publishing, 2019, vol. 577, Art. no. 012175. https://doi.org/10.1088/1757-899X/577/1/012175
[15] A. Kheldoun, R. Bradai, R. Boukenoui, and A. Mellit, "A new Golden Section method-based maximum power point tracking algorithm for photovoltaic systems," Energy Conversion and Management, no. 111, pp. 125–136, Mar. 2016. https://doi.org/10.1016/j.enconman.2015.12.039
[16] J.-D. Lee, C.-H. Chen, J.-Y. Lee, L.-M. Chien, and Y.-Y. Sun, "The Fibonacci search for cornerpoint detection of two-dimensional images," Mathematical and Computer Modelling: An International Journal, vol. 16, no. 11, pp. 15–20, Nov. 1992. https://doi.org/10.1016/0895-7177(92)90102-Q
[17] L. D. Chambers, Ed., The Practical Handbook of Genetic Algorithms: Applications (2nd edition). Chapman and Hall/CRC, 2000. https://doi.org/10.1201/9781420035568
[18] D. E. Goldberg, Genetic Algorithms in Search, Optimization & Machine Learning. Addison-Wesley, 1989.
[19] R. Horst and P. M. Pardalos, Eds., Handbook of Global Optimization. Nonconvex Optimization and Its Applications, vol. 2. Springer, Boston, MA, 1995. https://doi.org/10.1007/978-1-4615-2025-2
[20] F. Neri, G. Iacca, and E. Mininno, "Compact optimization," in Handbook of Optimization. From Classical to Modern Approach, I. Zelinka, V. Snášel, and A. Abraham, Eds. Springer-Verlag Berlin Heidelberg, 2013, pp. 337–364. https://doi.org/10.1007/978-3-642-30504-7_14
[21] C. Thornton, F. Hutter, H. H. Hoos, and K. Leyton-Brown, "Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms," in KDD '13: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, Illinois, USA, Aug. 2013, pp. 847–855. https://doi.org/10.1145/2487575.2487629
[22] H. Cai, J. Lin, and S. Han, "Chapter 4 – Efficient methods for deep learning," in Advanced Methods and Deep Learning in Computer Vision (Computer Vision and Pattern Recognition series), E. R. Davies and M. A. Turk, Eds. Academic Press, 2022, pp. 159–190. https://doi.org/10.1016/B978-0-12-822109-9.00013-8
[23] J. Waring, C. Lindvall, and R. Umeton, "Automated machine learning: Review of the state-of-the-art and opportunities for healthcare," Artificial Intelligence in Medicine, vol. 104, Art. no. 101822, Apr. 2020. https://doi.org/10.1016/j.artmed.2020.101822

Vadim V. Romanuke was born in 1979. He graduated from the Technological University of Podillya in 2001. In 2006, he received the Degree of Candidate of Technical Sciences in Mathematical Modelling and Computational Methods. The Candidate Dissertation suggested a way of increasing interference noise immunity of data transferred over radio systems. The Degree of Doctor of Technical Sciences in Mathematical Modelling and Computational Methods was received in 2014. The Doctor-of-Science Dissertation solved a problem of increasing efficiency of identification of models for multistage technical control and run-in under multivariate uncertainties of their parameters and relationships. In 2016, he received the status of a Full Professor.

He is a Professor of the Faculty of Mechanical and Electrical Engineering at the Polish Naval Academy. His current research interests concern decision making, game theory, semantic image segmentation, statistical approximation, and control engineering based on statistical correspondence. He has 395 scientific articles, one monograph, one tutorial, and four methodical guidelines in Functional Analysis, Mathematical and Computer Modelling Master Thesis development, Conflict-Controlled Systems, and Master Academic Practice. Before January 2018, Vadim Romanuke was the scientific supervisor of a Ukrainian budget grant work concerning minimization of water heat transfer and consumption.

Address for correspondence: 69 Śmidowicza Street, Gdynia, Poland, 81-127.
E-mail: romanukevadimv@gmail.com
ORCID iD: https://orcid.org/0000-0003-3543-3087

Fibonacci search technique [11], and the ternary search algorithm [12], [13].

A range of subrectangle sets defined by the minimal and maximal numbers of subintervals along each dimension is covered by running the nine-point rectangle dichotomy on every set of subrectangles. Once the set of values of currently found local minima points changes by no more than the tolerance, the set of local minimum points and the respective set of minimum values of the surface are returned. The presented approach is applicable to any task of finding local extrema. If primarily the purpose is to find all local maxima or the global maximum of the two-variable function, the presented approach is applied to the function taken with the negative sign. The presented approach is a significant and important contribution to the field of numerical estimation and approximate analysis. Although the method does not assure obtaining all local minima (or maxima) for any two-variable function, setting appropriate minimal and maximal numbers of subintervals makes missing some minima (or maxima) very unlikely.

Keywords – Finding extrema, local minima, rectangle dichotomy, subrectangles, unknown two-variable function.

Corresponding author's e-mail: romanukevadimv@gmail.com
©2022 Vadim Romanuke.

I. PRACTICAL ISSUES OF FINDING A MINIMUM

When a function (of one, two, or more variables) is given in its equation or a set of equations (in the "curly bracket" form, subdomain-wise), finding the global minimum of the function is an academic task fulfilled either by algebraic (symbolic) or numerical methods [1], [2]. In most practical problems, the function is unknown (i. e., its equation representation is uncertain or unknown), so its derivatives are not available and neither a symbolic approach nor a numerical method is applicable [3], [4]. Another practical issue arises when a symbolic (exact) approach cannot be applied because an equation contains analytically nondifferentiable parts [5], [6]. Then the global minimum can be determined approximately by a numerical method, but this way is impracticable if computing function values either takes unreasonably long time or is too expensive [2], [3], [7].

The golden-section search maintains the function values for four points whose three interval widths are in the golden ratio [10], [14], [15]. These ratios are maintained for each iteration and are maximally efficient. For a strictly unimodal function with an extremum inside the interval, the golden-section search will find that extremum, while for an interval containing multiple extrema (possibly including the interval boundaries), it will converge to one of them. This is an obvious demerit of the method because one cannot be sure that an interval contains a single minimum, and so other minima are just omitted. The golden-section search can be used for finding a minimum of a two-variable function as well by using the same iteration technique.

The Fibonacci search technique, derived from the golden-section search, is a similar algorithm to find the extremum (minimum or maximum) of a finite sequence of values that has a single local minimum or local maximum [11], [16]. The algorithm maintains a bracketing of the solution in which the length of the bracketed interval is a Fibonacci number. Obviously, the Fibonacci search technique may also fail in determining the global minimum if there are multiple minima on an interval.

A ternary search aims at finding the minimum of a unimodal function (of one, two, or more variables) [12]. It determines either that the minimum cannot be in the first third of the domain or that it cannot be in the last third of the domain, and then repeats on the remaining two thirds. In the case of a one-variable function, the ternary search is slightly faster than the golden-section search, but it also may omit the global minimum.
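For reference, the golden-section search discussed above can be sketched in a few lines (a textbook version, not code from the paper). On a unimodal function it reliably brackets the minimizer; on a multimodal one it converges to just one minimum, which is exactly the demerit noted above.

```python
import math

def golden_section_min(f, lo, hi, tol=1e-6):
    """Minimize a one-variable function on [lo, hi] by golden-section search.

    Keeps two interior probes whose three interval widths stay in the
    golden ratio; each iteration shrinks the bracket by a factor ~0.618.
    """
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    x1 = hi - invphi * (hi - lo)
    x2 = lo + invphi * (hi - lo)
    f1, f2 = f(x1), f(x2)
    while hi - lo > tol:
        if f1 < f2:                          # minimum lies in [lo, x2]
            hi, x2, f2 = x2, x1, f1
            x1 = hi - invphi * (hi - lo)
            f1 = f(x1)
        else:                                # minimum lies in [x1, hi]
            lo, x1, f1 = x1, x2, f2
            x2 = lo + invphi * (hi - lo)
            f2 = f(x2)
    return (lo + hi) / 2

# Unimodal case: recovers the minimizer of (x - 2)^2 on [0; 5].
print(golden_section_min(lambda x: (x - 2) ** 2, 0.0, 5.0))  # ~ 2.0
```

Note that the routine needs no derivatives, only function evaluations, which is why it fits the "unknown function" setting; its failure mode on multimodal data is what motivates the nine-point approach below.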
This is an open access article licensed under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0).

The global minimum can be found by the genetic algorithm [17], [18]. The genetic algorithm, based on imitating a biological evolution with selection and crossover of solution candidates, is far more reliable in determining the minimum. Although being far slower than the golden-section search and ternary search algorithms, the genetic algorithm is more reliable as omitting the global minimum is less likely (when the function is not unimodal [19], [20]) – see an example in Fig. 1, where the global minimum is to be found for a two-variable function (surface)

 f(x, y) = 1.2 sin(0.28x) cos(0.238y − π/3) + sin(0.8x − π/4) cos(0.58y − π/6) + 2 sin(0.33x − π/5) cos(0.22y − π/4) + e^{0.0004(x + 7)²} + e^{0.0004(x − 8)²} + e^{0.0006(y + 6)²} + e^{0.0006(y − 11)²}   (1)

defined on rectangle

 [−7; 8] × [−6; 11]   (2)

by x ∈ [−7; 8], y ∈ [−6; 11].

Fig. 1. An example of when both the golden-section search and ternary search find a local minimum and omit the global minimum, whereas the global minimum is found by the genetic algorithm (the surface value axis is shown in reverse direction for easier observation of minima).

The genetic algorithm used for this example is run by a series of its input parameters (apart from the equation of the function and the rectangle) that need to be specified. However, often even the genetic algorithm by accurately specified parameters fails to find the global minimum. A counterexample is shown in Fig. 2, where the global minimum is to be found for a surface (a slight modification of that in Fig. 1)

 f(x, y) = 1.2 sin(0.28x) cos(0.238y − π/3) + sin(0.8x − π/4) cos(0.58y − π/6) + 2 sin(0.33x − π/5) cos(0.22y − π/4) + e^{0.0004(x + 9)²} + e^{0.0004(x − 9)²} + e^{0.0006(y + 9)²} + e^{0.0006(y − 11)²}   (3)

defined on rectangle

 [−9; 9] × [−9; 11]   (4)

by x ∈ [−9; 9], y ∈ [−9; 11]. Besides, the genetic algorithm is far slower than both the golden-section search and ternary search algorithms as it must operate on rather a great deal of solution candidates. In general, genetic algorithms converge slower as the problem size grows (the rectangle area, in this particular case). Therefore, if the function evaluation is expensive (in the sense of either computational time or a factual cost to evaluate a value of the function), the genetic algorithm is practically inappropriate (unacceptable).

Fig. 2. An example of when none of the algorithms "sees" the global minimum, but each of the algorithms finds the same local minimum, omitting another local minimum whose surface value is greater (the surface value axis is shown in reverse direction for easier observation of minima).

Apart from the need to determine the global minimum, often all local minima of an unknown surface are required to be found. However, the mentioned methods allow determining (or locating) only one minimum on a rectangle, and this minimum may be not global (Fig. 2). To find all local minima on a rectangle, this rectangle should be broken into a subset of subrectangles, on each of which no more than a single local minimum is supposed to be. Obviously, this supposition is not always true, so some local minima, including the global one, may be lost. An example is the problem of fine-tuning hyperparameters and training parameters of neural networks [21], [22]. An objective function is unknown and its evaluation is either time-consuming or resource-consuming, or both [23]. Thus, neither exact methods nor numerical methods can be efficiently applied to find (quasi-)optimal parameters. In addition, knowing all local minima can be useful to form a set of (quasi-)optimal parameters, because often the objective function value at the global minimum is not significantly lower than the objective function value at a local minimum, and the local minimum point may have more appropriate components to use in practice.

II. GOAL AND OBJECTIVES TO BE ACCOMPLISHED

As the existing approaches are incapable of determining all local minima of an unknown two-variable function (surface) within any bounded rectangle, the goal is to develop a method by which they could be efficiently found regardless of the rectangle area. For this reason, the rectangle should be broken into narrower subrectangles. To achieve the goal, the following objectives are to be accomplished:
1. To impose specific conditions on the surface and its extrema on the bounded rectangle.
2. To suggest a method (algorithm #1) for determining a local minimum on the rectangle or returning a specific answer implying that the rectangle, apart from the local minimum, contains other extrema.
3. To suggest a method (algorithm #2) of breaking the initial rectangle into a set of subrectangles, for each of which a local minimum is found or the specific answer is returned by algorithm #1. Thus, algorithm #1 is to be incorporated into algorithm #2.
4. To suggest a method (algorithm #3) of adjusting algorithm #2 so that all the local minima would be determined with an acceptable accuracy (tolerance) and no specific answer would be returned by algorithm #1. Algorithm #3 should incorporate algorithm #2 to be a novel approach of finding local minima of a surface.
5. To exemplify the suggested approach.
6. To discuss applicability and significance of the approach.
7. To make an appropriate conclusion on it.

III. SURFACE AND ITS EXTREMA ON A BOUNDED RECTANGLE

Based on the practical experience, it is naturally assumed that a surface f(x, y) defined on rectangle

 [a; b] × [c; d] by b > a and d > c   (5)

is sufficiently smooth (i. e., it does not contain piecewise constant parts) and bounded. Whether surface f(x, y) is continuous on rectangle (5) or not, this surface is supposed to have no points of jump discontinuity. In addition, it is supposed that the surface has an extremum on open rectangle

 (a; b) × (c; d).   (6)

However, this supposition may be false and so the surface can have no extrema on rectangle (5). The supposition that the surface is a strictly unimodal two-variable function with an extremum inside rectangle (5) is not made.

IV. NINE-POINT RECTANGLE DICHOTOMY

There are six inputs to algorithm #1: the interval endpoints a, b, c, d, a formula for evaluating surface f(x, y) at any point [x y] ∈ [a; b] × [c; d], and tolerance ε (a sufficiently small positive number).

Why two-by-two points are insufficient to be probed in searching for a minimum is illustrated in Fig. 3. Therefore, the case with three-by-three points must be studied. Let these points x_1, x_2, x_3, y_1, y_2, y_3 be selected uniformly within rectangle (5) as a 3×3 lattice (Fig. 4):

 x_1 = a + (b − a)/4 = (3a + b)/4,
 x_2 = a + (b − a)/2 = (a + b)/2,
 x_3 = a + 3(b − a)/4 = (a + 3b)/4,   (7)

and

 y_1 = c + (d − c)/4 = (3c + d)/4,
 y_2 = c + (d − c)/2 = (c + d)/2,
 y_3 = c + 3(d − c)/4 = (c + 3d)/4.   (8)

Alternatively, points (7), (8) are calculated starting from the middle point:

 x_2 = (a + b)/2, x_1 = (a + x_2)/2, x_3 = (x_2 + b)/2,   (9)

and

 y_2 = (c + d)/2, y_1 = (c + y_2)/2, y_3 = (y_2 + d)/2.   (10)

Fig. 3. Examples of when probing only two-by-two internal points leads to cutting off the subrectangle containing the least value of a surface on a given rectangle (the surface value is proportional to the circle size, and the least value is marked by square; the area which is cut off is darkened).

Fig. 4. Probing three-by-three internal points as a 3×3 uniform lattice. The endpoints of the initial rectangle and nodes of this lattice determine altogether 16 equal subrectangles.

At the first (initial) step of algorithm #1, the surface is evaluated at points

 [x_i y_j] for i = 1, 2, 3 and j = 1, 2, 3   (11)

by (9), (10), and the minimum of nine values

 {f(x_i, y_j)} for i = 1, 2, 3 and j = 1, 2, 3   (12)

is found. Denote this minimum by z_*. Point

 [x_* y_*] ∈ {[x_i y_j]} for i = 1, 2, 3 and j = 1, 2, 3   (13)

at which this minimum is found is stored. Minimum value z_* = f(x_*, y_*) is stored as well. Here, Δx = x_2 − x_1 and Δy = y_2 − y_1. While Δx > ε and Δy > ε, the following routine is executed.

If
 z_* = f(x_1, y_1)   (14)
then
 b = x_2, d = y_2   (15)
and (9), (10) are re-specified; if
 z_* = f(x_1, y_2)   (16)
then
 b = x_2, c = y_1, d = y_3   (17)
and (9), (10) are re-specified; if
 z_* = f(x_1, y_3)   (18)
then
 b = x_2, c = y_2   (19)
and (9), (10) are re-specified; if
 z_* = f(x_2, y_1)   (20)
then
 a = x_1, b = x_3, d = y_2   (21)
and (9), (10) are re-specified; if
 z_* = f(x_2, y_2)   (22)
then
 a = x_1, b = x_3, c = y_1, d = y_3   (23)
and (9), (10) are re-specified; if
 z_* = f(x_2, y_3)   (24)
then
 a = x_1, b = x_3, c = y_2   (25)
and (9), (10) are re-specified; if
 z_* = f(x_3, y_1)   (26)
then
 a = x_2, d = y_2   (27)
and (9), (10) are re-specified; if
 z_* = f(x_3, y_2)   (28)
then
 a = x_2, c = y_1, d = y_3   (29)
and (9), (10) are re-specified; if
 z_* = f(x_3, y_3)   (30)
then
 a = x_2, c = y_2   (31)
and (9), (10) are re-specified. If

 max{f(x_i, y_j)} = f(x_2, y_2)   (32)

then algorithm #1 stops, returning the specific answer implying that the area, apart from the local minimum, contains other extrema. This answer is returned in the form of the empty set of surface minimum (M_* = ∅) and four subrectangles

 [a; x_2] × [c; y_2], [a; x_2] × [y_2; d], [x_2; b] × [c; y_2], [x_2; b] × [y_2; d].   (33)

Otherwise, if (32) is false, the minimum of values (12) is found, values

 {x_*, y_*, z_*}   (34)

are stored, and new Δx = x_2 − x_1 and Δy = y_2 − y_1 are calculated.

All the 10 cases of the routine conditions and their outcomes are illustrated in Fig. 5, where the four subrectangles (33) following condition (32) are seen in the last subplot. Upon the other nine conditions, unlike using the golden-section search, by which the area-to-search is decreased by

 (2/(√5 − 1))² = 4/(6 − 2√5) ≈ 2.618 times,

the area-to-search by algorithm #1 becomes decreased by 4 times. This means that algorithm #1 may converge faster than the golden-section search. Nevertheless, it cannot guarantee that the global minimum will not be omitted.

In fact, algorithm #1 is a nine-point rectangle dichotomy running while Δx > ε or Δy > ε unless condition (32) turns true. The returned output is either M_* = {x_*, y_*, z_*} or M_* = ∅ and four subrectangles (33).

Fig. 5. The 10 possible cases and their outcomes (the area which is cut off is darkened) issued from one of conditions (14), (16), (18), (20), (22), (24), (26), (28), (30), (32), where the least value is marked by square and the maximum by (32) is marked by circle.

Algorithm #1 itself is insufficient to find all minima of a surface. Nevertheless, owing to third points along the axes of x and y, it can sometimes outperform any two-by-two-point search (at least by not returning a "false" local minimum and simultaneously losing a "real" local minimum like in the examples in Figs. 1–3). Subrectangles (33) that follow condition (32) are to be further studied as to whether each contains minima.

V. RUNNING THE NINE-POINT RECTANGLE DICHOTOMY ON A SET OF SUBRECTANGLES

As the initial rectangle (5) may contain multiple local minima, it is reasonable to break the initial rectangle into a set of equal subrectangles, for each of which a local minimum would be found or the specific answer would be returned by algorithm #1. Each of the intervals [a; b] and [c; d] is to be broken into a set of equal-length subintervals. Let us denote the number of such subintervals along each axis by N. Then there are N² equal subrectangles.
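The nine-point routine of Section IV (conditions (14)–(32)) can be sketched compactly before the subrectangle bookkeeping is detailed. The code below is our own reading of algorithm #1, not the author's implementation: on a unimodal bowl it homes in on the minimum, and when the lattice maximum sits at the centre node it returns the four quarter subrectangles (33) instead.

```python
def nine_point_dichotomy(f, a, b, c, d, eps=1e-4):
    """Algorithm #1 sketch: shrink [a; b] x [c; d] around the best node of a
    3x3 interior lattice, cutting off 75 % of the area per step, or split the
    rectangle in four when condition (32) fires (maximum at the centre node).

    Returns ('min', x, y, z) or ('split', four_subrectangles).
    """
    while (b - a) / 4 > eps or (d - c) / 4 > eps:
        xs = [a + (b - a) * q / 4 for q in (1, 2, 3)]   # x1, x2, x3 by (7)
        ys = [c + (d - c) * q / 4 for q in (1, 2, 3)]   # y1, y2, y3 by (8)
        vals = {(i, j): f(xs[i], ys[j]) for i in range(3) for j in range(3)}
        if max(vals, key=vals.get) == (1, 1):           # condition (32)
            x2, y2 = xs[1], ys[1]
            return ('split', [(a, x2, c, y2), (a, x2, y2, d),
                              (x2, b, c, y2), (x2, b, y2, d)])
        i, j = min(vals, key=vals.get)
        # Conditions (14)-(31): keep the half-width window whose endpoints
        # are the neighbours of the best lattice node (xs[i], ys[j]).
        gx = [a] + xs + [b]
        gy = [c] + ys + [d]
        a, b = gx[i], gx[i + 2]
        c, d = gy[j], gy[j + 2]
    x, y = (a + b) / 2, (c + d) / 2
    return ('min', x, y, f(x, y))

# A unimodal bowl with minimum at (1, -2):
kind, x, y, z = nine_point_dichotomy(lambda x, y: (x - 1) ** 2 + (y + 2) ** 2,
                                     -5.0, 5.0, -5.0, 5.0)
print(kind, round(x, 3), round(y, 3))
```

The neighbour-window update reproduces all nine shrink cases at once: for instance, the best node at (x_1, y_1) yields the new rectangle [a; x_2] × [c; y_2], exactly as in (14), (15).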
These subrectangles are For each of subrectangles (42)–(45) the eliminating-endpoint condition is checked. A minimum a ;; b × cd (35) {[ ] [ ]} { } kk l l k=1 l=1 ( leftbot ) ( leftbot ) , (46)  xy **  by if it is found on subrectangle (42), is deleted if ba − aa = , ba+ , b = b (36) 1 1 N ab + ( leftbot ) ( leftbot ) kk or or xa − <ε x − <ε * k * and cd + ( leftbot ) ( leftbot ) ll or ; (47) yc − <ε y − <ε * l * ba − ab = by ba + for kN = 2, (37) kk−1 kk ( leftbot ) ( leftbot ) otherwise, (46) is stored along with fx , y as a ( ) ** and local minimum on subrectangle (42). A minimum dc − ( lefttop) ( lefttop)  xy , (48) cc = , dc+ , dd = (38)  ** 1 1 N if it is found on subrectangle (43), is deleted if and ab + ( lefttop) ( lefttop ) kk dc − xa − <ε or x − <ε or * k * cd = by dc+ for lN = 2, . (39) ll−1 ll cd + ( lefttop) ll ( lefttop) y − <ε or yd − <ε ; (49) * * l There are seven inputs to algorithm #2 (which will incorporate algorithm #1): the interval endpoints a, b, c, d, a ( lefttop) ( lefttop) formula for evaluating surface f(x, y) at any point [ xy]∈ otherwise, (48) is stored along with as a fx , y ( ) ** , tolerance ε, the number of subintervals N. At the first step of local minimum on subrectangle (43). A minimum algorithm #2, algorithm #1 is applied to subrectangle ( rightbot ) ( rightbot )  xy  , (50)  **  a ;; b × cd for kN = 1, and lN = 1, . (40) [ ] [ ] kk l l if it is found on subrectangle (44), is deleted if If [ xy ] is found for subrectangle (40), i. e. M ≠∅ for ** * ab + this subrectangle, then the condition eliminating endpoints is ( rightbot ) ( rightbot ) kk x − <ε or xb − <ε or * * k checked. 
If cd + ( rightbot ) ( rightbot ) ll y − c <ε or y − <ε ; (51) xa − <ε or xb − <ε or * l * * k * k yc − <ε or yd − <ε (41) * l * l = Applied Computer Systems _________________________________________________________________________________________________2022/27 ( rightbot ) ( rightbot ) ( obs ) (b ) ( obs ) otherwise, (50) is stored along with fx , y as a SS = , S =  by  S , (60) ( ) { } * * ** * **  local minimum on subrectangle (44). A minimum ( obs ) (b ) ( obs ) ZZ = , Z = f by ,  Z . (61) { ( )} ** * ** ( righttop) ( righttop)  x y  , (52) **   () c Let us denote a local minimum of function f ( xc , ) by x . If if it is found on subrectangle (44), is deleted if () c f x , c < max fS( ) (62) ( ) ab + ( righttop) ** kk ( righttop) x − <ε or xb − <ε or * * k then point cd + ( righttop) ll ( righttop) y − <ε or yd − <ε ; (53) * * l () c [ xy]= x c (63)  * ( righttop) ( righttop ) otherwise, (52) is stored along with fx , y as a ( ) ** can be counted a local boundary minimum and local minimum on subrectangle (45). ( obs ) ( c ) ( obs ) At the second step of algorithm #2, all the local minima SS = , S = x c  S , (64)  { } ** ** *  found on N subrectangles (35) are aggregated into a set S ( obs ) ( c ) ( obs ) whose elements are sorted in ascending order. Let ZZ = , Z = fx , c  Z . (65) Z = fS( ) { ( )} ** ** ** * be a set of surface values at the local minima in . Then all () d local minima of functions Let us denote a local minimum of function f ( xd , ) by x . If f ay , , f by , , f xc , , f xd , ( ) ( ) ( ) ( ) () d f x , d < max fS (66) ( ) ( ) * * are found on intervals then point [cd ; ] , [cd ; ] , [ab ; ] , [ab ; ] , () d xy = x d (67) [ ]  * respectively. Let us denote a local minimum of function () a f ( ay , ) by y . If can be counted a local boundary minimum and () a f ay , < max f ( S ) (54) ( ) * * ( obs ) ( d ) ( obs ) SS = , S = xd  S , (68)  { } ** ** *  ( obs ) ( d ) ( obs ) then point ZZ = , Z = fx , d  Z . 
At the end of the routine, algorithm #2 returns sets S* and Z*.

VI. COVERING A RANGE OF SUBRECTANGLE SETS

Algorithm #2, if number N is sufficiently great, returns all the local minima of a surface, including its local boundary minima, if any, satisfying inequalities (54), (58), (62), (66). How to guess the sufficiently great number of subintervals? Inputting always very great numbers of subintervals into algorithm #2 will significantly slow down finding minima. Inputting a fewer number of subintervals may lead to losing some minima. Thus, it is reasonable to try a set of these numbers to see whether new minima appear as number N is increased.

There are eight inputs to algorithm #3 (which will incorporate algorithm #2): the interval endpoints a, b, c, d, a formula for evaluating surface f(x, y) at any point [x y] of the rectangle, tolerance ε, the minimal number of subintervals N_min, and the maximal number of subintervals N_max. At the first step of algorithm #3, algorithm #2 is applied by N = N_min. While M* = ∅ and N < N_max, the number of subintervals is increased by 1 and algorithm #2 is applied again. If M* = ∅ at N = N_max, then all local minima of surface f(x, y) along the boundaries of rectangle (5) are found. First, whereas S* = ∅ and Z* = ∅, the set of local minima of function f(a, y) is found as Y*^(a), and updates (56), (57) are fulfilled ∀ y^(a) ∈ Y*^(a). Then the set of local minima of function f(b, y) is found as Y*^(b), and updates (60), (61) are fulfilled ∀ y^(b) ∈ Y*^(b). The set of local minima of function f(x, c) is found as X*^(c), and updates (64), (65) are fulfilled ∀ x^(c) ∈ X*^(c). The set of local minima of function f(x, d) is found as X*^(d), and updates (68), (69) are fulfilled ∀ x^(d) ∈ X*^(d). Finally, algorithm #3 stops returning sets S* and Z*.

Otherwise, if a nonempty set S* is found at some N = N′ (i. e., M* ≠ ∅ at N = N′), a counter of the number of subintervals is set at 1: j = 1. Besides, the found sets S* and Z* are stored indicating the counter:

  S*^(1) = S*,  Z*^(1) = Z*. (70)

Then, at the second step of algorithm #3, the number of subintervals is increased by 1 and counter j is increased by 1, whereupon algorithm #2 is applied by this new number of subintervals N_j and

  S*^(j) = S*,  Z*^(j) = Z*. (71)

If

  |S*^(j)| = |S*^(j−1)| (72)

by

  S*^(j−1) = {[x*^(j−1, h) y*^(j−1, h)]}_{h=1}^{H} (73)

and

  S*^(j) = {[x*^(j, h) y*^(j, h)]}_{h=1}^{H}, (74)

then it is checked whether those H values in sets (73) and (74) are sufficiently close (or, being accurate to ε, are practically the same). Thus, if

  max_{h=1, ..., H} |x*^(j, h) − x*^(j−1, h)| < ε (75)

and

  max_{h=1, ..., H} |y*^(j, h) − y*^(j−1, h)| < ε, (76)

then algorithm #2 is applied by the number of subintervals 2N_j, returning a set S* denoted by W*. If

  |W*| = |S*^(j)| (77)

by

  W* = {[t*^(h) u*^(h)]}_{h=1}^{H}, (78)

then it is checked whether those H values in sets (74) and (78) are sufficiently close. Thus, if

  max_{h=1, ..., H} |t*^(h) − x*^(j, h)| < ε (79)

and

  max_{h=1, ..., H} |u*^(h) − y*^(j, h)| < ε, (80)

then algorithm #3 stops returning

  S* = S*^(j−1) = {[x*^(j−1, h) y*^(j−1, h)]}_{h=1}^{H} (81)

and

  Z* = Z*^(j−1) = {z*^(j−1, h)}_{h=1}^{H} = {f(x*^(j−1, h), y*^(j−1, h))}_{h=1}^{H}. (82)

In fact, algorithm #3 covering a range of algorithm #2 runs is a nine-point iterated rectangle dichotomy. It is worth noting that algorithm #3 does not return the specific answer. The specific answer is an internal object of algorithm #1 serving to divide a subinterval further. If the task is to find the global minimum, then it is fulfilled as a trivial appendix to the nine-point iterated rectangle dichotomy.
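The stopping test of algorithm #3, inequalities (72)–(80), can be sketched as follows. Names such as `sets_close` and `should_stop` are illustrative; the minima sets are assumed to be sorted identically, as the text prescribes.

```python
# Sketch of the stopping test in algorithm #3, Eqs. (72)-(80). `prev` and
# `curr` stand for the minima sets S*^(j-1) and S*^(j); `doubled` is W*,
# the set returned by algorithm #2 run with 2*N_j subintervals.

def sets_close(p, q, eps):
    """Same cardinality (Eq. (72)/(77)) and coordinatewise maximal
    deviation below the tolerance eps (Eqs. (75)-(76)/(79)-(80))."""
    if len(p) != len(q):
        return False
    return all(abs(x1 - x2) < eps and abs(y1 - y2) < eps
               for (x1, y1), (x2, y2) in zip(p, q))

def should_stop(prev, curr, doubled, eps):
    """Algorithm #3 stops when S*^(j) repeats S*^(j-1) and the repetition
    is confirmed by the doubled-subinterval run W*."""
    return sets_close(prev, curr, eps) and sets_close(curr, doubled, eps)
```

The confirmation run with 2N_j subintervals guards against a coincidental match of two successive, equally coarse grids.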
As sets (81), (82) are obtained, the global minimum by function value

  z** = min_{h=1, ..., H} z*^(j−1, h) (83)

is calculated and the global minimum point

  [x** y**] ∈ {[x*^(j−1, h) y*^(j−1, h)]}_{h=1}^{H} (84)

at which

  z** = f(x**, y**) (85)

is extracted. If the global minimum is to be found purely on open rectangle (6), then the global minimum function value

  z** = min {Z*^(j−1) \ {f(a, y^(a)), f(b, y^(b)), f(x^(c), c), f(x^(d), d)}}
  ∀ y^(a) ∈ Y*^(a) and ∀ y^(b) ∈ Y*^(b) and ∀ x^(c) ∈ X*^(c) and ∀ x^(d) ∈ X*^(d) (86)

is calculated and global minimum point (84) at which (85) holds is extracted.

VII. EXAMPLES

To compare the performance of the suggested method to the performance of the golden-section search, ternary search, and genetic algorithm, specific surface instances are taken that have multiple extrema [19]. The respective experiments confirm that the nine-point iterated rectangle dichotomy outperforms those and other approaches. An example of applying the nine-point iterated rectangle dichotomy is shown in Fig. 6, where a toy surface

  f(x, y) = 1.2 sin(0.28x) cos(0.238y − π/3) + sin(0.8x − π/4) cos(0.58y − π/6)
  + 2 sin(0.33x − π/5) cos(0.22y − π/4)
  + e^(0.0004(x + 11)²) + e^(0.0004(x − 17)²) + e^(0.0006(y + 12)²) + e^(0.0006(y − 24)²) (87)

minimized on rectangle

  [−11; 17] × [−12; 24] (88)

is used to model the minimization problem.

[Fig. 6 shows the surface plot; its callouts mark: a local minimum found by both the ternary search and genetic algorithms; a local minimum found by the golden-section search; the global minimum found by the nine-point iterated rectangle dichotomy but "unseen" by the other algorithms.]

Fig. 6. The 16 local minima (two of which are unseen from this view angle), including both the local boundary minima and the global minimum of surface (87) on rectangle (88) (the surface value axis is shown in reverse direction for easier observation of minima).

The 16 local minima (including the global minimum) are found by N = 15, i. e. by breaking the initial rectangle into 225 subrectangles, where initially N_min = 2 and N_max = 24. Except for the nine-point dichotomy approach, none of the other algorithms finds the global minimum. Another example is presented in Fig. 7 for a toy surface

  f(x, y) = sin(sin(1.1x − π/8) sin(1.8y − π/3) − cos(0.3xy − π/8)) (89)

minimized on rectangle

  [−2; 8] × [−1; 4], (90)

where the surface (unbeknown to a researcher in reality) has a sort of modulation. This one is a more difficult example for both the golden-section search and ternary algorithms. The 15 local minima (including the global minimum) are found here by N = 37 (where N_min = 2 and N_max = 44). Therefore, the algorithm takes here 1369 subrectangles to determine those 15 "stable" minima. Compared to the example in Fig. 6 with surface (87), the example in Fig. 7 with surface (89) can be thought of as more computationally expensive (despite roughly the same surface complexity and number of local minima).
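For readers who want to reproduce the second example approximately, the sketch below evaluates surface (89) on rectangle (90) and brute-forces the best grid node. This is only a sanity check against which any minimizer can be compared, not the nine-point dichotomy itself; the coefficients are taken from (89) as printed above.

```python
import math

# Sanity-check sketch: evaluate the toy surface of Eq. (89) on rectangle (90)
# and locate the best node of a uniform grid by brute force.

def f89(x, y):
    """Toy surface (89) as printed above."""
    return math.sin(math.sin(1.1 * x - math.pi / 8) * math.sin(1.8 * y - math.pi / 3)
                    - math.cos(0.3 * x * y - math.pi / 8))

def brute_force_min(f, a, b, c, d, n):
    """Evaluate f on an (n+1) x (n+1) grid over [a; b] x [c; d], return the best node."""
    best = None
    for i in range(n + 1):
        for j in range(n + 1):
            x = a + (b - a) * i / n
            y = c + (d - c) * j / n
            z = f(x, y)
            if best is None or z < best[2]:
                best = (x, y, z)
    return best

# Rectangle (90): [-2; 8] x [-1; 4].
x0, y0, z0 = brute_force_min(f89, -2.0, 8.0, -1.0, 4.0, 200)
```

Such exhaustive tabulation is exactly what the paper argues is impracticable for expensive function evaluations; it is affordable here only because the toy surface is cheap.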
  8  38   = Applied Computer Systems _________________________________________________________________________________________________2022/27 a local minimum found by the global minimum found both the ternary search by the nine-point iterated and golden-section search rectangle dichotomy and algorithms genetic algorithm f ( xy , ) Fig. 7. The 15 local minima (one of which is unseen from this view angle), including both the local boundary minima and the global minimum of surface (89) on rectangle (90) (the surface value axis is shown in reverse direction for easier observation of minima). VIII. DISCUSSION the respective surfaces are defined). Then set S lacks some The presented approach is a significant and important minima. If a researcher deals with presumably highly fluctuating surfaces (data), the part in algorithm #3 can be contribution to the field of numerical estimation and approximate analysis. It is clear that it is applicable to slightly modified: if inequalities (75) and (76) hold then algorithm #2 can be applied by the number of subintervals mN , whichever task of finding local extrema. If the purpose is to find j all local maxima or the global maximum of the surface, the where m > 2. Reversely, studying rare-fluctuating surfaces (data) may be more efficient if, for instance, m = 1.5 or about presented approach is applied to surface −f(x, y). In real-world that. contemporary practice, it must serve as a computationally It is worth noting that if the ratio of lengths of intervals [a; b] efficient tool for optimization tasks where exact and full-scale and [c; d] is too far from 1, the initial rectangle (5) is too numerical methods are inapplicable. In addition, the fact of that stretched. Then, obviously, any subrectangle will be of the same often the objective function or surface is defined on a (known) (bad) aspect ratio. A problem may arise such that some local discrete set (e. g., in searching for an optimal size). 
Using a minima along the dimension of the longer interval (or, in simple numerical method always implies that the function (surface) is words, along the “stretched” dimension) may remain “unseen” considered a finite set of values corresponding to a finite set of by algorithm #3. To avoid this situation, it is recommended to points to which those values are assigned. The examples are the keep the rectangle aspect ratio close to 1:1. However, if the problem of fine-tuning hyperparameters and training surface fluctuates rarely along a dimension, this dimension can parameters of neural networks, adjusting parameters of be left “stretched” (i. e., a much shorter interval in the other complex systems (like radars, radio telescopes, massive dimension can be considered). engines, big-scaled constructions, etc.), and optimal If the two-variable function is unimodal, the nine-point parametrization at all. Without knowing additional information about the surface, iterated rectangle dichotomy is slower than the golden-section unfortunately, there cannot be assurance of that every local search. The slowdown is relatively significant if to measure the minimum will be included into the output set S . Doubling the computational time for a long series of the minimization problems. Nevertheless, the function unimodality is a rare number of subintervals along each dimension (axis) to see occasion in real-world practice. Even if not all local minima (or whether the same minima remain, when (77) and (79), (80) are maxima) are to be found, the golden-section search may fail to expected to be true, may fail in a case if the surface (strictly “hit” the global minimum (or maximum), whereas the nine- speaking, the data) is overly fluctuating (something similar to Figs. 6 and 7, if to consider much larger rectangles, on which point iterated rectangle dichotomy completes the task. 
IX. CONCLUSION

The suggested method of nine-point iterated rectangle dichotomy is an approach to finding all local extrema of an unknown two-variable function bounded on a given rectangle regardless of the rectangle area. Five of the eight inputs of the method routine are straightforwardly defined, whereas the tolerance with the minimal and maximal numbers of subintervals along each dimension are adjustable. These subinterval numbers are the primary adjustable inputs. Although the method does not assure obtaining all local minima (or maxima) for any surface, setting appropriate minimal and maximal numbers of subintervals makes missing some minima (or maxima) very unlikely. The tolerance is the secondary adjustable input. If the minimal and maximal numbers of subintervals are selected too small, setting whichever small tolerance cannot help in finding every local extremum.

The endpoints of the initial intervals constituting the rectangle and a formula for evaluating the surface at any point of this rectangle are inputted along with the three adjustable inputs. Having broken the initial rectangle into a set of subrectangles, the nine-point iterated rectangle dichotomy "gropes" around every local minimum by successively cutting off 75 % of the subrectangle area or dividing the subrectangle in four. A range of subrectangle sets defined by the minimal and maximal numbers of subintervals along each dimension is covered by running the nine-point rectangle dichotomy on every set of subrectangles. As a set of values of currently found local minima points changes no more than by the tolerance, the set of local minimum points and the respective set of minimum values of the surface are returned.

REFERENCES

[1] M. L. Lial, R. N. Greenwell, and N. P. Ritchey, Calculus with Applications (11th edition). Pearson, 2016.
[2] L. D. Hoffmann, G. L. Bradley, and K. H. Rosen, Applied Calculus for Business, Economics, and the Social and Life Sciences. McGraw-Hill Higher Education, 2005.
[3] S. A. Vavasis, "Complexity issues in global optimization: A survey," in Handbook of Global Optimization. Nonconvex Optimization and Its Applications, vol. 2, R. Horst and P. M. Pardalos, Eds. Springer, Boston, MA, 1995, pp. 27–41. https://doi.org/10.1007/978-1-4615-2025-2_2
[4] J. Stewart, Calculus: Early Transcendentals (6th edition). Brooks/Cole.
[5] E. Hewitt and K. R. Stromberg, Real and Abstract Analysis. Springer, 1965. https://doi.org/10.1007/978-3-642-88044-5
[6] K. R. Stromberg, Introduction to Classical Real Analysis. Wadsworth.
[7] R. Fletcher, Practical Methods of Optimization (2nd edition). J. Wiley and Sons, Chichester, 1987.
[8] J. Kiefer, "Sequential minimax search for a maximum," Proceedings of the American Mathematical Society, vol. 4, no. 3, pp. 502–506, 1953. https://doi.org/10.2307/2032161
[9] M. Avriel and D. J. Wilde, "Optimality proof for the symmetric Fibonacci search technique," Fibonacci Quarterly, no. 4, pp. 265–269, 1966.
[10] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, "Minimization or maximization of functions," in Numerical Recipes: The Art of Scientific Computing (3rd edition). Cambridge University Press, New York, 2007, pp. 487–562.
[11] K. J. Overholt, "Efficiency of the Fibonacci search method," BIT Numerical Mathematics, vol. 13, no. 1, pp. 92–96, Mar. 1973. https://doi.org/10.1007/BF01933527
[12] S. Edelkamp and S. Schrödl, "Chapter 7 – Symbolic search," in Heuristic Search, S. Edelkamp and S. Schrödl, Eds. Morgan Kaufmann, 2012, pp. 283–318. https://doi.org/10.1016/B978-0-12-372512-7.00007-9
[13] Ş. E. Amrahov, A. S. Mohammed, and F. V. Çelebi, "New and improved search algorithms and precise analysis of their average-case complexity," Future Generation Computer Systems, vol. 95, pp. 743–753, Jun. 2019. https://doi.org/10.1016/j.future.2019.01.043
[14] G. S. Rani, S. Jayan, and K. V. Nagaraja, "An extension of golden section algorithm for n-variable functions with MATLAB code," in IOP Conf. Series: Materials Science and Engineering, IOP Publishing, 2019, vol. 577, Art. no. 012175. https://doi.org/10.1088/1757-899X/577/1/012175
[15] A. Kheldoun, R. Bradai, R. Boukenoui, and A. Mellit, "A new Golden Section method-based maximum power point tracking algorithm for photovoltaic systems," Energy Conversion and Management, no. 111, pp. 125–136, Mar. 2016. https://doi.org/10.1016/j.enconman.2015.12.039
[16] J.-D. Lee, C.-H. Chen, J.-Y. Lee, L.-M. Chien, and Y.-Y. Sun, "The Fibonacci search for cornerpoint detection of two-dimensional images," Mathematical and Computer Modelling: An International Journal, vol. 16, no. 11, pp. 15–20, Nov. 1992. https://doi.org/10.1016/0895-7177(92)90102-Q
[17] L. D. Chambers, Ed., The Practical Handbook of Genetic Algorithms: Applications (2nd edition). Chapman and Hall/CRC, 2000. https://doi.org/10.1201/9781420035568
[18] D. E. Goldberg, Genetic Algorithms in Search, Optimization & Machine Learning. Addison-Wesley, 1989.
[19] R. Horst and P. M. Pardalos, Eds., Handbook of Global Optimization. Nonconvex Optimization and Its Applications, vol. 2. Springer, Boston, MA, 1995. https://doi.org/10.1007/978-1-4615-2025-2
[20] F. Neri, G. Iacca, and E. Mininno, "Compact optimization," in Handbook of Optimization. From Classical to Modern Approach, I. Zelinka, V. Snášel, and A. Abraham, Eds. Springer-Verlag Berlin Heidelberg, 2013, pp. 337–364. https://doi.org/10.1007/978-3-642-30504-7_14
[21] C. Thornton, F. Hutter, H. H. Hoos, and K. Leyton-Brown, "Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms," in KDD'13 Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, Illinois, USA, Aug. 2013, pp. 847–855. https://doi.org/10.1145/2487575.2487629
[22] H. Cai, J. Lin, and S. Han, "Chapter 4 – Efficient methods for deep learning," in Computer Vision and Pattern Recognition. Advanced Methods and Deep Learning in Computer Vision, E. R. Davies and M. A. Turk, Eds. Academic Press, 2022, pp. 159–190. https://doi.org/10.1016/B978-0-12-822109-9.00013-8
[23] J. Waring, C. Lindvall, and R. Umeton, "Automated machine learning: Review of the state-of-the-art and opportunities for healthcare," Artificial Intelligence in Medicine, vol. 104, Art. no. 101822, Apr. 2020. https://doi.org/10.1016/j.artmed.2020.101822

Vadim V. Romanuke was born in 1979. He graduated from the Technological University of Podillya in 2001. In 2006, he received the Degree of Candidate of Technical Sciences in Mathematical Modelling and Computational Methods. The Candidate Dissertation suggested a way of increasing interference noise immunity of data transferred over radio systems. The Degree of Doctor of Technical Sciences in Mathematical Modelling and Computational Methods was received in 2014. The Doctor-of-Science Dissertation solved a problem of increasing efficiency of identification of models for multistage technical control and run-in under multivariate uncertainties of their parameters and relationships. In 2016, he received the status of a Full Professor.

He is a Professor of the Faculty of Mechanical and Electrical Engineering at the Polish Naval Academy. His current research interests concern decision making, game theory, semantic image segmentation, statistical approximation, and control engineering based on statistical correspondence. He has 395 scientific articles, one monograph, one tutorial, and four methodical guidelines in Functional Analysis, Mathematical and Computer Modelling Master Thesis development, Conflict-Controlled Systems, and Master Academic Practice. Before January 2018, Vadim Romanuke was the scientific supervisor of a Ukrainian budget grant work concerning minimization of water heat transfer and consumption.

Address for correspondence: 69 Śmidowicza Street, Gdynia, Poland, 81-127.
E-mail: romanukevadimv@gmail.com
ORCID iD: https://orcid.org/0000-0003-3543-3087
