Simulate long-exposure photography with OpenCV

January 13, 2013 (updated March 01, 2014)

Long-exposure photography is a technique for capturing dynamic scenes that produces a contrast between their static and moving elements. The parts of the scene that were in motion appear blurred, creating a pleasing effect.

Below is a simulated long-exposure shot of a stream I took recently. Technically, it is not a long-exposure photograph but a simulation: the image was generated from a video recording taken with an old iPod, which was then processed in software into a single image.

A long-exposure photograph of a stream.

(Forgive the poor quality, I don’t own a good camera. Nonetheless, this image demonstrates the desired effect.)

In this post, I describe a simple method to generate images like the one above from video, and present an implementation as a Python script, which you can use to create your own simulated long-exposure photographs.


Let a photograph be a matrix of values representing measures of light in two dimensions, at some point in time $t$:

$$P(t) = \left[ \mathbf{p}_{i,j}(t) \right]$$

These values $\mathbf{p}_{i,j}$ can be either scalar, as in the case of monochromatic photographs, or multi-dimensional (vectors), as in the case of colour photographs. It’s practical to think of these as pixels.

Now consider that a photograph is a capture of a scene projected through a camera’s lens at a specific point in time. The matrix $P$ is just one in an infinite series. A long-exposure photograph is the accumulation of such matrices over a given time window $\Delta$, determined by the exposure time. That is, the matrix $P^{(\Delta)}$ representing a long-exposure photograph is given by:

$$P^{(\Delta)} = \int_{t_0}^{t_0 + \Delta} P(t)\, dt$$

Now consider a series of photographs (or frames) taken in quick succession. If we treat the very short interval between frames as the differential $dt$, we can approximate the long-exposure matrix $P^{(\Delta)}$ by a finite sum: $P^{(\Delta)} \approx \sum_{i = 1}^n P_i \times d$, where $P_i = P(t_i)$ are discrete values of the function $P$ above, and $n$ is the number of frames. The infinitesimal $dt$ becomes the constant $d = \Delta / n$: the length of the exposure interval divided by the number of frames. Thus,

$$P^{(\Delta)} \approx \sum_{i = 1}^{n} P_i\, \frac{\Delta}{n} = \Delta \left[ \frac{1}{n} \sum_{i = 1}^{n} P_i \right]$$

The term in brackets on the far right is simply the average of the set of matrices $\left\{ P_i \right\}$. In other words, we can simulate a long-exposure photograph by averaging many photographs taken in relatively quick succession — such as the frames of a video. The concept is rather simple, and can be implemented in software.
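As a rough sketch (this is not the Hipshot source itself, just a minimal illustration of the averaging idea), the procedure can be written with NumPy and OpenCV’s Python bindings along these lines; the function names are my own:

```python
import numpy as np

def average_frames(frames):
    """Average an iterable of equally sized frames into one image."""
    total = None
    count = 0
    for frame in frames:
        # Accumulate in floating point to avoid 8-bit overflow.
        frame = np.asarray(frame, dtype=np.float64)
        if total is None:
            total = np.zeros_like(frame)
        total += frame
        count += 1
    if count == 0:
        raise ValueError("no frames to average")
    return (total / count).astype(np.uint8)

def frames_from_video(path):
    """Yield successive frames of a video file via OpenCV."""
    import cv2  # deferred import: average_frames() alone needs only NumPy
    capture = cv2.VideoCapture(path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        yield frame
    capture.release()
```

Usage would then look something like `cv2.imwrite("stream.png", average_frames(frames_from_video("stream.mp4")))`, with the file names being whatever you choose.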


Hipshot is a Python script which converts a video file into a single image file simulating a long-exposure photograph, using the method described above.1

Use it like so:

$ hipshot /path/to/foo.mp4

Or drag-and-drop the video file onto the script’s icon. Hipshot will create the image file in the same directory as the video file.

Note that this implementation requires that the camera remain perfectly still for the entire recording! It’s best to use a tripod, or at least to set the camera down while it’s recording.

Hipshot uses the Python bindings of the OpenCV library.


Real long-exposure photography requires the proper equipment,2 which can be expensive, so the ability to simulate this style of photography makes it much more accessible. Now anyone can create long-exposure photographs like a hipster, without an expensive camera!

I’ve found this technique to be especially useful in low-light, rainy, or otherwise noisy environments; the noise picked up by the camera seems to be normally distributed, and so averages to zero. This technique also has more interesting applications than art. Astronomers have used a similar procedure to subtract out noise caused by cosmic radiation from photographs taken in space.3

Please leave a comment below if you have any questions or feedback. If you do create simulated long-exposure shots using Hipshot, feel free to share.

Update: Note that the repository has moved from Google Code to Bitbucket. The latest version is also available from PyPI.

  1. This implementation differs from the method described above in that the exposure-time factor $\Delta$ is normalized to one. However, the effect is the same.

  2. For video tutorials on traditional long-exposure photography, see: Long exposure tutorial with Scott Kelby and Long exposure photography tutorial on YouTube.

  3. An average of many exposures taken when the shutter is closed is subtracted from the raw data from the CCD image sensors to produce a more accurate photograph. This is called “dark-frame subtraction.”
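As a minimal numeric sketch of the dark-frame subtraction idea in footnote 3 (with entirely hypothetical values, not real sensor data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Fixed-pattern sensor bias: some pixels read "hot" even with the shutter closed.
bias = rng.uniform(0.0, 30.0, (8, 8))

# Estimate the bias by averaging many closed-shutter exposures (the dark frame).
dark_frame = np.mean([bias + rng.normal(0.0, 2.0, bias.shape) for _ in range(50)], axis=0)

# A raw exposure is the true scene plus that same bias; subtracting the
# dark frame recovers a much more accurate photograph.
scene = np.full((8, 8), 100.0)
raw = scene + bias
corrected = raw - dark_frame
```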

← Return to blog index