Remember this golden rule:
Always, always normalize your data before feeding it to an ML / DL algorithm.
The reason: your columns have different ranges. Say one column lies in [10000, 20000] and another in [4000, 5000]. If you plot those coordinates, the distances are dominated by the column with the larger range, so distance-based clustering/classification will behave poorly (regression may still cope). Scaling brings every column onto the same range while preserving the relative distances within each column, just at a different scale. It is just like Google Maps: when you zoom in the scale decreases, and when you zoom out it increases.
You are free to choose the scaling method; sklearn.preprocessing offers quite a few (MinMaxScaler, StandardScaler, RobustScaler, and others).
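For illustration, here is a minimal sketch with made-up numbers (the toy array X_toy is hypothetical, not from the original data) showing how MinMaxScaler maps two columns with very different ranges onto the same [0, 1] scale:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Two toy columns with very different ranges (hypothetical values)
X_toy = np.array([[10000.0, 4000.0],
                  [15000.0, 4500.0],
                  [20000.0, 5000.0]])

print(MinMaxScaler().fit_transform(X_toy))
# [[0.  0. ]
#  [0.5 0.5]
#  [1.  1. ]]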
Edit:
Use this code:
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import DBSCAN

# Scale every feature to [0, 1]
scaler = MinMaxScaler()
scaler.fit(X)
X_norm = scaler.transform(X)

# Cluster the scaled data; note the small eps, since all coordinates are now <= 1
dbscan = DBSCAN(eps=0.05, min_samples=3, leaf_size=30)
clusters = dbscan.fit_predict(X_norm)

np.unique(dbscan.labels_)
array([-1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32,
33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47])
What I found is that DBSCAN is a density-based approach. I tried sklearn's normalize (from sklearn.preprocessing import normalize), which rescales each sample (row) to unit norm rather than putting the features on a comparable scale; it didn't work, and it shouldn't for DBSCAN, which needs every feature to contribute to the distance (and hence to the density estimate) on a similar scale.
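To see what goes wrong: normalize works row-wise, rescaling each sample to unit length, which can make genuinely different points look identical. A minimal sketch with hypothetical values:

import numpy as np
from sklearn.preprocessing import normalize

X_toy = np.array([[10000.0, 4000.0],
                  [20000.0, 8000.0]])  # second row is just the first row doubled

print(normalize(X_toy))  # row-wise L2 normalization
# [[0.92847669 0.37139068]
#  [0.92847669 0.37139068]]  -> two distinct points collapse onto the same spot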
So I went with MinMaxScaler, which puts every feature on the same [0, 1] scale. One thing to note: since all the scaled data points are below 1, eps should be chosen in a similarly small range (hence eps=0.05 above).
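If you are unsure what eps to use on the scaled data, one common heuristic (not part of the original answer, just a sketch reusing X_norm and min_samples from above) is to look at the sorted k-nearest-neighbour distances and pick eps near the elbow:

import numpy as np
from sklearn.neighbors import NearestNeighbors

k = 3                                                  # same as min_samples above
nn = NearestNeighbors(n_neighbors=k + 1).fit(X_norm)   # +1 because each point is its own nearest neighbour
distances, _ = nn.kneighbors(X_norm)
k_dist = np.sort(distances[:, -1])                     # distance to each point's k-th nearest neighbour
# Plot k_dist (e.g. with matplotlib) and pick eps around the bend of the curve;
# on MinMax-scaled data it will typically be well below 1.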
Kudos :)