python - What is the runtime complexity (big O) of the following pseudocode?


I had a very intense debate about the runtime complexity of a super simple algorithm with a colleague of mine. In the end we both agreed to disagree, but I've been thinking about it since; it has challenged my basic understanding of computer science fundamentals, and therefore I must get additional insight on the matter.

Given the following Python, what is the big-O runtime complexity?

```python
for c in "how today?":
    print(c)
```

Now, I called it out as being on the order of O(n), aka linear. Meaning it's dependent on the length of the string, and therefore the loop will grow linearly as the length of the string grows.

My colleague said, "No, it's constant, because we know the set of strings we're dealing with (in our case), and the max string is 255 characters long (in our case), therefore it must be constant." He followed up by saying "because we have a max upper-bound on the character length of the string, this results in O(255), which reduces to O(1)."

Anyway, we went back and forth, and after 45 minutes of both of us drawing sketches, we were both dead-locked on the issue.

My question is: in what world or math system is the loop above a constant-time loop? Even if we knew our upper-bound was 1,000,000 characters and the set of strings could be anywhere from 0 to 1,000,000 characters long, the loop would still exhibit linear running times depending on the size of the string.
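To make the linear claim concrete, here is a quick sketch (my own addition, not part of the original debate) that counts the loop's iterations for strings of different lengths:

```python
def count_iterations(s):
    """Count how many times the loop body from the question runs."""
    steps = 0
    for c in s:
        steps += 1  # one unit of work per character
    return steps

# Work grows in direct proportion to the string's length: O(n).
for n in (255, 1000, 1_000_000):
    print(n, count_iterations("x" * n))
```

Doubling the length doubles the count; knowing an upper bound on the lengths you happen to feed it doesn't change that relationship.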

I additionally asked him if he thinks the following code is O(1) if the upper-bound size of n is known, meaning the code will only ever operate on a string with a max upper-bound of 255 characters:

```python
s = "how today?"
for c in s:
    for d in s:
        print(c + d)
```

He said it is constant time... even after I explained that it is an O(n^2) algorithm and demonstrated that counting its operations produces a quadratic curve.
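A sketch of that kind of demonstration (my reconstruction, not the exact code from the debate): count the nested loop's iterations and watch the totals grow as the square of the length.

```python
def count_pair_work(s):
    """Count iterations of the nested c/d loop from the question."""
    steps = 0
    for c in s:
        for d in s:
            steps += 1  # one unit of work per (c, d) pair
    return steps

# Doubling the length quadruples the work: O(n^2).
for n in (10, 20, 40):
    print(n, count_pair_work("x" * n))
```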

So, am I missing some theoretical concept whereby either of the above claims is true, depending on how the theory goes? Just to be clear, his understanding is that I am correct if n is not known. If the upper-bound of n is known, he is asserting that both algorithms in this post have constant runtime complexity.

I'm just looking to maintain my sanity, though perhaps if I'm wrong there's additional learning I can benefit from. My good colleague was very convincing. Also, if anybody has additional links or material on the subject specific to this question, please add them in the comments.

Answer

Applying big-O notation to a single scenario in which all the inputs are known is ludicrous. There is no big-O for a single case.

The whole point is to get a worst-case estimate for arbitrarily large, unknown values of n. If you already know the exact answer, why on earth would you waste time trying to estimate it?

Mathy / computer-sciencey edit:

Big-O notation is defined for n growing arbitrarily large: f(n) is O(g(n)) if there is some constant c such that c * g(n) ≥ f(n) for all n greater than some nmin. In the cases at issue here, where g strictly dominates f, your "opponent" can multiply f(n) by "eleventy-quadjillion" and it doesn't matter, because for all points "to the right" of some point nmin, the graph of "eleventy-quadjillion times f(n)" will lag below g(n)... forever.

Example: 2^n is less than or equal to n^2... for the short segment of the x-axis that includes n = 2, 3, and 4 (at n = 3, 2^n is 8, while n^2 is 9). This doesn't change the fact that their big-O relationship is the opposite: O(2^n) is much greater than O(n^2), because big-O says nothing about n values less than nmin. If you set nmin to 4 (thus ignoring the graph to the left of 4), you'll see that the n^2 line never exceeds the 2^n line.
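That crossover is easy to verify numerically; a quick sketch (mine, not the answerer's):

```python
# For n = 2, 3, 4 the n^2 curve sits at or above 2^n...
for n in (2, 3, 4):
    print(n, 2**n, n**2)  # (4, 4), (8, 9), (16, 16)

# ...but from nmin = 5 onward, 2^n stays strictly above n^2
# (spot-checked here up to n = 1000).
assert all(2**n > n**2 for n in range(5, 1001))
```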

If your "opponent" multiplies n^2 by a larger constant c to raise "his" n^2 line above your 2^n line, you haven't lost yet... you just slide nmin to the right a bit. Big-O says that no matter how big he makes c, you can always find a point after which his equation loses and yours wins, forever.
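In code, "you can always slide nmin to the right" looks something like this (a sketch; `nmin_for` is a name I made up):

```python
def nmin_for(c):
    """Find a point after which 2**n stays above c * n**2.

    Scans upward, requiring the inequality to hold across a 50-value
    lookahead as a sanity check (past the final crossover the ratio
    2**n / n**2 only grows, so the lookahead is purely defensive).
    """
    n = 1
    while not all(2**m > c * m**2 for m in range(n, n + 50)):
        n += 1
    return n

# However large the opponent makes c, a winning nmin exists.
for c in (1, 1000, 10**12):
    print(c, nmin_for(c))
```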

But if you constrain n on the right, you've violated the prerequisites for any kind of big-O analysis. In your argument with your co-worker, one of you invented an nmax, and the other set nmin somewhere to the right of it --- and surprise, the results are nonsensical.

For instance, the first algorithm you showed does indeed do about n work for inputs of length n... in the general case. If I were building my own algorithm that called it n times, I would have to consider mine a quadratic O(n^2) algorithm... again, in the general case.

But if I could prove that I would never call your algorithm with an input greater than 10 (meaning I had more information, and could thus estimate your algorithm more precisely), using big-O to estimate its performance would be throwing away what I'd learned about its actual behavior in the one case I care about. I should instead replace your algorithm with a suitably large constant --- changing my algorithm from c * n^2 to c * 10 * n... which is just c_bigger * n. I could honestly claim my algorithm is linear, because in this case, your algorithm's graph will never rise above that constant value. This would change nothing about the big-O performance of your algorithm, because big-O is simply not defined for constrained cases like this.
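Here is how that replacement plays out in a sketch (the function names are illustrative; the bound of 10 is the one from the paragraph above):

```python
NMAX = 10  # proven upper bound on the inner algorithm's input size

def inner_cost(n):
    """General-case cost of the linear inner algorithm."""
    return n

def caller_cost(n):
    """Calls the inner algorithm n times on inputs capped at NMAX.

    In the general case this would be n * n = O(n^2) work, but with
    the cap each call costs at most NMAX, so the total is bounded by
    NMAX * n --- i.e., c_bigger * n, linear in n.
    """
    return sum(inner_cost(min(k, NMAX)) for k in range(1, n + 1))

# The caller's total work never exceeds NMAX * n once inputs are capped.
for n in (100, 1000):
    assert caller_cost(n) <= NMAX * n
```

Note that the linearity here is a statement about the *caller* under the cap; it says nothing about the inner algorithm's own big-O behavior.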

To wrap up: in general, the first algorithm you showed is linear by big-O standards. In a constrained case where the maximum input is known, it is a mistake to speak of it in big-O terms at all. In such a constrained case, it could legitimately be replaced by some constant value when discussing the big-O behavior of some other algorithm, but that says absolutely nothing about the big-O behavior of the first algorithm itself.

In conclusion: O(Ackermann(n)) works fine when nmax is small enough. Very, very small enough...

