Generally, the technique is to blur the image so that small-scale details become irrelevant and only large-scale differences in the background illumination are retained. You then subtract the blurred image from the original to remove the uneven illumination, leaving only the localised features visible.
My preferred tool is ImageMagick, but the principle is the same in OpenCV. Here I clone your original image, blur it with a sigma of 8 pixels, and then subtract the blurred image from the original:

convert http://s8.postimg.org/to03oxzyd/example_image.png \( +clone -blur 0x8 \) -compose difference -composite -auto-level out.jpg
And here I blur with a sigma of 32 pixels, and subtract the blurred image from the original:

convert http://s8.postimg.org/to03oxzyd/example_image.png \( +clone -blur 0x32 \) -compose difference -composite -auto-level out32.jpg
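If you want to stay in OpenCV, here is a minimal Python sketch of the same blur-and-subtract idea. The filename, the sigma of 32 and the min-max normalisation are my assumptions, chosen to mirror the second ImageMagick command above rather than anything prescribed:

import cv2

# Read the source image as greyscale (path is a placeholder for the example image above).
img = cv2.imread('example_image.png', cv2.IMREAD_GRAYSCALE)

# Blur heavily so only the large-scale illumination gradient survives.
# Kernel size (0, 0) lets OpenCV derive it from the sigma; 32 mirrors -blur 0x32.
background = cv2.GaussianBlur(img, (0, 0), sigmaX=32)

# Subtract the estimated background; absdiff plays the role of -compose difference.
diff = cv2.absdiff(img, background)

# Stretch the result to the full 0-255 range, similar to -auto-level.
out = cv2.normalize(diff, None, 0, 255, cv2.NORM_MINMAX)

cv2.imwrite('out32.png', out)

As with the ImageMagick version, the sigma controls the trade-off: too small and you start removing the features you want to keep, too large and some of the illumination gradient remains.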
Keywords: Image processing, unsharp mask, uneven lighting.