In your case the page contains invalid UTF-8 data, which confuses BeautifulSoup and makes it think the page uses windows-1252. You can use this trick:
import BeautifulSoup  # BeautifulSoup 3
soup = BeautifulSoup.BeautifulSoup(content.decode('utf-8', 'ignore'))
This discards any invalid byte sequences from the page source, so BeautifulSoup can guess the encoding correctly.
You can replace 'ignore' with 'replace' and scan the text for U+FFFD replacement characters to see exactly what was discarded.
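For instance, a minimal sketch of that check, assuming `content` holds the raw bytes of the page:

decoded = content.decode('utf-8', 'replace')
# Each invalid byte sequence becomes the U+FFFD replacement character
for lineno, line in enumerate(decoded.splitlines()):
    if u'\ufffd' in line:
        print('line %d: %s' % (lineno + 1, line))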
In practice it is very hard to write a crawler that guesses a page's encoding correctly every time (browsers are very good at this nowadays). You can use modules like 'chardet', but in your case, for example, it guesses the encoding as ISO-8859-2, which is not correct either.
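For reference, this is how chardet is typically invoked (again assuming `content` is the raw byte string):

import chardet

guess = chardet.detect(content)
# e.g. {'encoding': 'ISO-8859-2', 'confidence': 0.77, ...}
print('%(encoding)s (confidence %(confidence).2f)' % guess)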
If you really need to detect the encoding of any page a user can possibly supply, you should either build a multi-level detection function (try utf-8, then latin1, and so on, as we did in our project) or wrap the detection code from Firefox or Chromium as a C module. A sketch of the multi-level approach follows.
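A minimal sketch of such a multi-level function, assuming `content` is the raw byte string; the candidate list is an assumption you should tune to the pages your crawler actually sees (latin1 accepts every byte, so it should always go last):

def detect_decode(content, encodings=('utf-8', 'windows-1252', 'latin1')):
    # Try each candidate in order; the first clean decode wins.
    for enc in encodings:
        try:
            return content.decode(enc), enc
        except UnicodeDecodeError:
            continue
    # Only reachable if latin1 is not in the list;
    # fall back to lossy UTF-8 as a last resort.
    return content.decode('utf-8', 'ignore'), 'utf-8 (lossy)'

text, used = detect_decode(content)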