My dataset currently contains 161 folders with 500 images (.img) inside each folder, for a total of 80,500 images.
Is there any code I need to change? I am currently stuck on splitting the data into train/valid/test sets and saving them.
The code below shows how I load my 161 dataset folders:
import os
import numpy as np
import cv2
import glob

# Collect every image path from all 161 dataset folders
folders = glob.glob('C:/Users/Pc/Desktop/datasets/*')
imagenames_list = []
for folder in folders:
    for f in glob.glob(folder + '/*.jpg'):
        imagenames_list.append(f)

# Read each image as grayscale and stack them into one array
read_images = []
for image in imagenames_list:
    read_images.append(cv2.imread(image, cv2.IMREAD_GRAYSCALE))
images = np.array(read_images)
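The loading code above only builds the image array; `train_test_split` also needs a label array `y` of the same length. A minimal sketch of deriving labels from the folder names, assuming one class per folder (the helper name `build_labels` is hypothetical, not part of any library):

```python
import os
import numpy as np

def build_labels(imagenames_list, folders):
    # Hypothetical helper: map each folder to an integer class id,
    # then label every image by the folder it came from.
    folder_to_label = {os.path.basename(f): i
                       for i, f in enumerate(sorted(folders))}
    return np.array([folder_to_label[os.path.basename(os.path.dirname(p))]
                     for p in imagenames_list])
```

With labels built this way, `images` and `y` stay index-aligned, so a later shuffle-and-split keeps each image with its class.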
The code below shows how I split the data into 60% train / 20% test / 20% valid.
Am I proceeding correctly, and are the train/test/valid splits actually linked to my dataset? How can I store them in a pickle file?
from sklearn.model_selection import train_test_split

# Placeholder data with the same number of samples as my dataset
X, y = np.random.random((80500, 10)), np.random.random((80500,))

# First split off 20% for validation, then take 20% of the original
# total from the remainder: p / (1 - p) = 0.25 of the remaining 80%
p = 0.2
new_p = p / (1 - p)
X, X_val, y, y_val = train_test_split(X, y, test_size=p)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=new_p)
print([i.shape for i in [X_train, X_test, X_val]])
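For the pickle part of the question, one common approach is to store all six split arrays in a single dict and dump that to one file. A minimal sketch, assuming the split arrays above exist (`save_splits` and `load_splits` are hypothetical helper names, not a library API):

```python
import pickle

def save_splits(path, X_train, y_train, X_val, y_val, X_test, y_test):
    # Store all six arrays in one pickle file as a dict,
    # so they can be reloaded together under matching keys.
    splits = {
        "X_train": X_train, "y_train": y_train,
        "X_val": X_val, "y_val": y_val,
        "X_test": X_test, "y_test": y_test,
    }
    with open(path, "wb") as f:
        pickle.dump(splits, f)

def load_splits(path):
    # Reload the dict of arrays from disk.
    with open(path, "rb") as f:
        return pickle.load(f)
```

Note that for arrays this large (80,500 grayscale images), `np.savez` or `np.save` would also work and avoids pickling overhead, but pickle is fine if a single-file dict is what you want.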
question from:
https://stackoverflow.com/questions/66052628/how-can-i-split-the-train-test-valid-data-from-datasets-and-store-it-in-pickle