## Big-O Made Trivial

In this post: Craig tries to convince you to treat asymptotic complexity as a relation, instead of in terms of sets.

### Awkward-O

Big-O notation is one of my pet peeves. It’s a perfect example of complicating something simple.

Even ignoring the fact that Big-O notation is hard to learn (instead of resembling other notation, it uses totally arbitrary symbols), it is clearly at odds with how people think about complexities. Have you ever seen someone write $f(n) = O(g(n))$, as if a function could be equal to a set of functions, instead of $f(n) \in O(g(n))$? How about $O(n) = O(n^2)$ instead of $O(n) \subseteq O(n^2)$? I think the reason people write these (literally false) statements is that they’re thinking in terms of *comparisons*, whereas Big-O notation is formulated in terms of sets.

When a person writes $O(n) = O(n^2)$, they almost certainly didn’t intend to say “The set of functions asymptotically upper bounded by $n$ is the same as the set of functions asymptotically upper bounded by $n^2$.”. That’s trivially false. They meant that linear costs are asymptotically upper bounded by quadratic costs, but ended up misusing/abusing Big-O notation instead.

(I suppose I can’t actually speak for “people”, but personally I do think about asymptotic complexity as a relation.)

Given that we’re thinking of asymptotic complexity in terms of comparisons, why not use notation based on that?

### Asymptotic Comparison Notation

So we want to represent asymptotic complexity as a relation. How does that work, exactly?

Well, we’ll use the same symbols we already use for relations and comparisons ($<$, $\leq$, $=$, $\geq$, and $>$) but slightly modify them to indicate asymptotic-ness. My personal (and arbitrary) preference is to indicate the asymptotic-ness by circling the operator. I represent “asymptotically less than” as a circled $<$ (⧀), “asymptotically equal to” as a circled $=$ (⊜), and so forth.

Here’s a table that shows each operator beside its English description, its analogue in Big-O notation, and its definition in terms of a proportional limit:

| Operator | English description | Big-O analogue | Limit definition |
|---|---|---|---|
| $f$ ⧀ $g$ | $f$ is asymptotically less than $g$ | $f \in o(g)$ | $\lim_{n \to \infty} f(n)/g(n) = 0$ |
| $f$ ≤⃝ $g$ | $f$ is asymptotically at most $g$ | $f \in O(g)$ | $\lim_{n \to \infty} f(n)/g(n) < \infty$ |
| $f$ ⊜ $g$ | $f$ is asymptotically equal to $g$ | $f \in \Theta(g)$ | $0 < \lim_{n \to \infty} f(n)/g(n) < \infty$ |
| $f$ ≥⃝ $g$ | $f$ is asymptotically at least $g$ | $f \in \Omega(g)$ | $\lim_{n \to \infty} f(n)/g(n) > 0$ |
| $f$ ⧁ $g$ | $f$ is asymptotically greater than $g$ | $f \in \omega(g)$ | $\lim_{n \to \infty} f(n)/g(n) = \infty$ |
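The proportional-limit definitions can be sketched numerically. Here’s a rough Python illustration (the helper `asym_compare` and its thresholds are my own, not from the post) that samples the ratio $f(n)/g(n)$ at a large $n$ and guesses which relation holds:

```python
def asym_compare(f, g, n=10**6):
    """Crudely classify the asymptotic relation between f and g by
    sampling the ratio f(n)/g(n) at a single large n.  This is only a
    heuristic: the real definitions use the limit as n -> infinity,
    and the 1e-3 / 1e3 cutoffs here are arbitrary."""
    r = f(n) / g(n)
    if r < 1e-3:
        return "f asymptotically less than g (f in o(g))"
    if r > 1e3:
        return "f asymptotically greater than g (f in omega(g))"
    return "f asymptotically equal to g (f in Theta(g))"

print(asym_compare(lambda n: n, lambda n: n**2))     # linear vs quadratic
print(asym_compare(lambda n: 3 * n + 7, lambda n: n))  # constant factors ignored
```

A sample point can’t actually prove a limit, of course; it just makes the table’s definitions concrete enough to play with.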

Do you see how ⧀ has a definition ($\lim f/g = 0$) that is the inverse of the definition of its inverse, ⧁ ($\lim f/g = \infty$)? How ≥⃝’s definition is equivalent to satisfying either ⧁’s definition or ⊜’s definition (because it’s greater than OR equal)? How there’s no confusion about what’s lower bounding and what’s upper bounding, thanks to the re-use of existing symbols?

Doesn’t that just… make sense?

### Examples

Here are a few examples of the asymptotic comparison operators in use, along with their limit justifications:

- Buying a computer with a faster processor doesn’t affect the complexity of an algorithm:
$c \cdot f$ ⊜ $f$, where $c > 0$, because $\lim_{n \to \infty} \frac{c \cdot f(n)}{f(n)} = c$ and $0 < c < \infty$.

- Linear algorithms are asymptotically less costly than quadratic algorithms:
$n$ ⧀ $n^2$ because $\lim_{n \to \infty} \frac{n}{n^2} = \lim_{n \to \infty} \frac{1}{n} = 0$.

- Running two algorithms, one after the other, is at least as asymptotically expensive as running one of them individually:
$f + g$ ≥⃝ $f$, where $f$ and $g$ are positive, because $\lim_{n \to \infty} \frac{f(n) + g(n)}{f(n)} \geq \lim_{n \to \infty} \frac{f(n)}{f(n)} = 1 > 0$.

- Logarithmic algorithms are asymptotically less costly than polynomial algorithms (using L’Hôpital’s rule):
$\ln(n)$ ⧀ $n^\epsilon$, for any $\epsilon > 0$, because $\lim_{n \to \infty} \frac{\ln(n)}{n^\epsilon} = \lim_{n \to \infty} \frac{1/n}{\epsilon \cdot n^{\epsilon-1}} = \lim_{n \to \infty} \frac{1}{\epsilon \cdot n^\epsilon} = 0$.

- Chaining asymptotic comparisons works like chaining ordinary comparisons:
$\ln(n)$ ⧀ $n$ ⧀ $n \cdot \ln(n)$ ⧀ $n^2$ ⊜ $3n^2 + n$ ⧀ $2^n$.

### Summary

Comparison operators are a natural fit when thinking about asymptotic complexity. Personally, I use the existing comparison operators and circle them to indicate the comparison is asymptotic.


## 6 Responses to “Big-O Made Trivial”



Unicode is with you 60% of the way: ⧀, ⊜, ⧁

What, and lose the fun of typing out “\Large \textcircled{\normalsize $\hspace{0.05 mm} <$} \normalsize” every time?

(I did actually find those. The problem, besides the missing ≤ and ≥ variants, is that lots of unicode characters render inconsistently. In firefox I can see all three characters in your post. In chrome, two of them are replaced by squares. I didn’t want people seeing mis-rendered characters.)

I’m in chrome on OSX and I can see all the characters. Since the job of the browser is to render unicode characters, I am surprised to find they can be inconsistent?

Simple, elegant and smart, I might use it in the future to wow colleagues. 😉

I can see arguments for and against replacing $=$ by $\subseteq$, but the relation symbols you suggest are much less flexible than big O notation. For example, try writing $\cosh(O(1/n))^n = (1 + O(1/n^2))^n = 1 + O(1/n)$ in your preferred notation. When one side of the equation is nothing but a big O symbol, of course, you are right; the power comes when you need to embed the big O inside something else.

Yes, this was mentioned in the reddit discussion as well. The asymptotic relational operators are easier to learn and easier to use when they apply, but big-O is more extensible. For example, I particularly like the definition of polynomial time as $P = n^{O(1)}$.