Given a problem statement, we tend to focus on understanding the problem and solving it. But my question is: are we solving the problem efficiently? As Software Engineers we don’t just write code, we write efficient code. When I say “efficient” I don’t only mean following coding standards; I also look at how effective my code is in terms of time and memory utilization, since, as we all know, both are important and expensive resources.
I remember, a few years back when I was learning Data Structures, wanting to understand how much time the Linear Search algorithm takes to find a particular element in a list, and then do a comparative analysis with Binary Search. So I wrote a piece of code to measure the time taken by Linear Search, and it looked something like this:
from time import time

start_time = time()
run_linear_search()          # the search routine being measured
end_time = time()
total_time = end_time - start_time
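For reference, a fuller, runnable version of that naive measurement might look like the sketch below; the linear_search function and the test list are made up here purely for illustration:

from time import time

def linear_search(items, target):
    # scan the list from left to right until the target is found
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

data = list(range(1_000_000))
start_time = time()
linear_search(data, 999_999)   # worst case: the target is the last element
end_time = time()
print(f"Linear search took {end_time - start_time:.6f} seconds")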
It worked fine for me, because I just wanted to know the time taken by the algorithm. I really did not ask myself questions like:
- Were there other processes running on the system?
- What about quantization error?
- Should I consider CPU cycles as an alternative to using the system timer?
It was only later that I realized the importance of these questions, and I started digging in. When I ran the above piece of code on different systems, the output was always different for the same input. The reason being:
- Other processes running on the system at the same time: the time measured for the algorithm depends on the resources (CPU, memory) consumed by those processes.
There was another big problem. The functions I used from Python's time module were time.time() and time.clock(). time.time() returns the time since the epoch (the point where the clock starts counting). The concerning part was that time.clock() gave wall-clock time on Windows but CPU time on UNIX. The code looked something like this:
import time
print(time.time())
# output: 1543161580.9558775
- Why does this happen on Windows? Quantization error: I work on a Windows x64 system, where the system timer updates only 18.2 times per second, so a reading can be off by almost 0.055 seconds (1/18.2), a loss due to quantization error. A quick way to check the timer granularity yourself is sketched below.
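A rough way to estimate the timer granularity on your own machine is to poll time.time() until its value changes; the sketch below does exactly that (the number of samples is arbitrary, and the result will vary with your OS and Python version):

import time

def estimate_timer_resolution(samples=10):
    # poll time.time() until the value changes and keep the smallest step seen
    steps = []
    for _ in range(samples):
        t0 = time.time()
        t1 = time.time()
        while t1 == t0:        # spin until the clock ticks over
            t1 = time.time()
        steps.append(t1 - t0)
    return min(steps)

print(f"Approximate time.time() resolution: {estimate_timer_resolution():.9f} seconds")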
Then I thought to myself, let’s switch to UNIX (Open Source ROCKS) and use time.clock(), since now I was considering CPU cycles and wanted to see how that works. Well, it worked fine, but soon I realized that for identical algorithms with identical inputs the measurements were still not consistent.
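As a side note, time.clock() was deprecated and later removed in Python 3.8; on current versions time.process_time() plays the same CPU-time role. A minimal sketch of that idea, with a made-up workload called busy_work, looks like this:

import time

def busy_work(n=1_000_000):
    # a made-up CPU-bound workload, just for illustration
    return sum(i * i for i in range(n))

cpu_start = time.process_time()   # CPU time of the current process; sleep time is excluded
busy_work()
cpu_end = time.process_time()
print(f"CPU time consumed: {cpu_end - cpu_start:.6f} seconds")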
Not much later I found that a better way to measure the time taken by my algorithm was Python’s timeit module. It records the time before and after the execution of the code and subtracts them, and it runs the statement many times so the result is more stable. The syntax looks something like this:
timeit.Timer(stmt='pass', setup='pass', timer=<timer function>, globals=None)
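A quick usage sketch (the statement and setup strings below are placeholders, not code from this post; the number of runs is arbitrary):

import timeit

# stmt runs `number` times after setup has run once; the total elapsed time is returned
elapsed = timeit.timeit(
    stmt='999 in data',                  # placeholder statement to time
    setup='data = list(range(1000))',    # executed once, not included in the timing
    number=10_000,
)
print(f"Total for 10,000 runs: {elapsed:.6f} seconds")

# the Timer class from the signature above works the same way
timer = timeit.Timer(stmt='999 in data', setup='data = list(range(1000))')
print(timer.timeit(number=10_000))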
Now I am sticking to timeit. I will be using timeit to measure the run time of algorithms in the following posts. Stay tuned: next I will discuss memory. Happy Coding!