When I started writing *The Imposter's Handbook*, this was the question that was in my head from the start: *what the f*** is Big O and why should I care?* I remember giving myself a few weeks to jump in and figure it out but, fortunately, I found that it was pretty straightforward after putting a few smaller concepts together.

**Big O is conceptual**. Many people want to qualify the efficiency of an algorithm based on the number of inputs. A common thought is *if I have a list with 1 item it can't be O(n) because there's only 1 item, so it's O(1)*. This is an understandable approach, but **Big O is a technical adjective**; it's not a benchmarking system. It's simply using math to describe the efficiency of what you've created.

**Big O is worst-case**, always. That means that even if the thing you're looking for is the very first item in the set, Big O doesn't care: a loop-based find is still considered O(*n*). That's because Big O is just a descriptive way of thinking about the code you've written, not the inputs expected.
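As a minimal sketch of that point (the function name is my own), a loop-based find is O(*n*) because in the worst case, when the target is last or absent, the loop visits every item:

```python
def find_index(items, target):
    """Linear search: check each item in turn.

    The target might be first (one comparison), but Big O describes
    the worst case: target last or missing, so this is O(n).
    """
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

print(find_index([3, 1, 4, 1, 5], 4))   # found at index 2
print(find_index([3, 1, 4, 1, 5], 9))   # absent: scanned all n items, returns -1
```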

## THERE YOU HAVE IT

I find myself thinking about things in terms of Big O a lot. The cart example, above, happened to me just over a month ago and I needed to make sure that I was flexing the power of Redis as much as possible.

I don't want to turn this into a Redis commercial, but I will say that it (and systems like it) have a lot to offer when you start thinking about things in terms of *time complexity*, which you should! **It's not premature optimization to think about Big O upfront, it's just *programming***, and I don't mean to sound snotty about that! If you can clip an O(*n*) operation down to O(*log n*) then you should, don't you think?

So, quick review:

- Plucking an item from a list using an index or a key: O(1)
- Looping over a set of *n* items: O(*n*)
- A nested loop over *n* items: O(*n^2*)
- A divide and conquer algorithm: O(*log n*)
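Each of those four cases can be sketched in a few lines of code (a rough illustration; the function names are my own, and binary search stands in for "divide and conquer"):

```python
def pluck(items, i):
    # Index access: one step no matter how big the list is -> O(1)
    return items[i]

def total(items):
    # One pass over all n items -> O(n)
    s = 0
    for x in items:
        s += x
    return s

def has_duplicate(items):
    # A loop inside a loop: roughly n * n comparisons -> O(n^2)
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def binary_search(sorted_items, target):
    # Divide and conquer: halve the search space each step -> O(log n).
    # Requires the input to be sorted.
    lo, hi = 0, len(sorted_items)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return -1
```

The payoff of that last one is the clipping mentioned above: for a million sorted items, the linear scan may make a million comparisons, while the binary search makes about twenty.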