ztlearn.utils package

Submodules

ztlearn.utils.conv_utils module

ztlearn.utils.conv_utils.alt_get_output_dims(input_height, input_width, kernel_size, strides, pad_height, pad_width)[source]

FORMULA (convolution): [((W - Kernel_W + 2P) / S_W) + 1] and [((H - Kernel_H + 2P) / S_H) + 1]
FORMULA (pooling): [((W - Pool_W + 2P) / S_W) + 1] and [((H - Pool_H + 2P) / S_H) + 1]
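
A quick worked instance of the formula above, with assumed example values (28 x 28 input, 3 x 3 kernel, stride 1, padding 1); this is only a sketch of the arithmetic, not library code:

    # assumed example values, not taken from the library
    input_height, input_width   = 28, 28   # H, W
    kernel_height, kernel_width = 3, 3     # Kernel_H, Kernel_W
    stride_height, stride_width = 1, 1     # S_H, S_W
    pad_height, pad_width       = 1, 1     # P

    output_height = ((input_height - kernel_height + 2 * pad_height) // stride_height) + 1  # 28
    output_width  = ((input_width  - kernel_width  + 2 * pad_width)  // stride_width)  + 1  # 28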

ztlearn.utils.conv_utils.get_output_dims(input_height, input_width, kernel_size, strides, padding_type='valid')[source]

SAME and VALID Padding

VALID: No padding is applied. All dimensions are assumed to be valid, so that the input image
gets fully covered by the filter and stride you specified.
SAME: Padding is applied to the input (if needed) so that the input image gets fully covered by
the filter and stride you specified. For a stride of 1, this ensures that the output image size is the same as the input.

References

[1] SAME and VALID Padding: http://bit.ly/2MtGgBM
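
The padding arithmetic described above can be sketched in plain Python as follows (assumed example values and the common SAME/VALID convention; not the library implementation):

    import math

    input_size, kernel_size, stride = 28, 3, 2   # assumed example values

    # VALID: no padding, keep only positions where the kernel fits entirely
    valid_out = math.floor((input_size - kernel_size) / stride) + 1          # 13

    # SAME: pad just enough so the whole input is covered
    same_out  = math.ceil(input_size / stride)                               # 14
    total_pad = max((same_out - 1) * stride + kernel_size - input_size, 0)   # 1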

ztlearn.utils.conv_utils.unroll_inputs(padded_inputs, batch_num, filter_num, output_height, output_width, kernel_size)[source]

ztlearn.utils.data_utils module

ztlearn.utils.data_utils.accuracy_score(predictions, targets)[source]

compute the average accuracy score of predictions vs targets
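
As a rough NumPy sketch of what an average accuracy score reduces to (illustrative only, not the library code):

    import numpy as np

    predictions = np.array([0, 2, 1, 1])
    targets     = np.array([0, 1, 1, 1])

    accuracy = np.mean(predictions == targets)   # 0.75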

ztlearn.utils.data_utils.clip_gradients(grad, g_min=-1.0, g_max=1.0)[source]

enforce min and max boundaries on a given gradient
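
Gradient clipping of this kind typically reduces to an element-wise clamp; a minimal NumPy sketch (illustrative, not the library code):

    import numpy as np

    grad    = np.array([-3.2, 0.4, 7.1])
    clipped = np.clip(grad, -1.0, 1.0)   # [-1.0, 0.4, 1.0]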

ztlearn.utils.data_utils.computebar(total, curr, size=45, sign='#', prefix='Computing')[source]

generate a graphical loading bar [####----] for a given iteration

ztlearn.utils.data_utils.custom_tuple(tup)[source]

customize a tuple to display comma-separated numbers

ztlearn.utils.data_utils.eucledian_norm(vec_a, vec_b)[source]

compute the Euclidean distance between two vectors
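
The Euclidean distance between two vectors is the 2-norm of their difference; a minimal NumPy sketch (illustrative only):

    import numpy as np

    vec_a = np.array([1.0, 2.0, 3.0])
    vec_b = np.array([4.0, 6.0, 3.0])

    distance = np.linalg.norm(vec_a - vec_b)   # 5.0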

ztlearn.utils.data_utils.extract_files(path, filepath)[source]

extract files from a detected compressed format

ztlearn.utils.data_utils.maybe_download(path, url, print_log=False)[source]

download the data from the url, or return the existing file

ztlearn.utils.data_utils.min_max(input_data, axis=None)[source]

compute the min max standardization for a given input matrix and axis
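
Min-max standardization rescales values into the [0, 1] range; a hedged NumPy sketch of the idea (axis handling omitted, not the library implementation):

    import numpy as np

    input_data = np.array([2.0, 4.0, 6.0, 10.0])
    scaled = (input_data - input_data.min()) / (input_data.max() - input_data.min())
    # [0.0, 0.25, 0.5, 1.0]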

ztlearn.utils.data_utils.minibatches(input_data, input_label, batch_size, shuffle)[source]

generate minibatches on a given input data matrix

ztlearn.utils.data_utils.normalize(input_data, axis=-1, order=2)[source]

compute the normalization of a given input matrix for the specified order and axis
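
For the default order=2 and axis=-1, this corresponds to scaling each row to unit L2 length; a minimal NumPy sketch (assumed equivalent, not the library code):

    import numpy as np

    input_data = np.array([[3.0, 4.0],
                           [6.0, 8.0]])
    l2 = np.linalg.norm(input_data, ord=2, axis=-1, keepdims=True)  # [[5.], [10.]]
    normalized = input_data / l2                                    # each row now has unit length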

ztlearn.utils.data_utils.one_hot(labels, num_classes=None)[source]

generate a one-hot encoding for a given set of labels
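
One-hot encoding maps each integer label to an indicator row; a minimal NumPy sketch (illustrative, not the library code):

    import numpy as np

    labels      = np.array([0, 2, 1])
    num_classes = 3
    encoded     = np.eye(num_classes)[labels]
    # [[1. 0. 0.]
    #  [0. 0. 1.]
    #  [0. 1. 0.]]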

ztlearn.utils.data_utils.polynomial_features(inputs, degree=2, repeated_elems=False, with_bias=True)[source]

generate a feature matrix of all polynomial combinations up to and including the given degree
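
The expansion can be sketched with itertools.combinations_with_replacement; the helper below is hypothetical and only illustrates the idea (the library's handling of the bias column and repeated elements may differ):

    import numpy as np
    from itertools import combinations_with_replacement

    def poly_features_sketch(inputs, degree=2, with_bias=True):
        n_samples, n_features = inputs.shape
        start  = 0 if with_bias else 1
        combos = [c for d in range(start, degree + 1)
                    for c in combinations_with_replacement(range(n_features), d)]
        # each combination is the product of the selected input columns
        return np.column_stack([np.prod(inputs[:, list(c)], axis=1) for c in combos])

    X = np.array([[1.0, 2.0]])
    print(poly_features_sketch(X))   # [[1. 1. 2. 1. 2. 4.]]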

ztlearn.utils.data_utils.print_pad(pad_count, pad_char='\n')[source]

pad output with a total of pad_count characters of type pad_char

ztlearn.utils.data_utils.print_results(predictions, test_labels, num_samples=20)[source]

print model targeted vs predicted results

ztlearn.utils.data_utils.print_seq_results(predicted, test_label, test_data, unhot_axis=1, interval=5)[source]

print results for a model predicting a sequence

ztlearn.utils.data_utils.print_seq_samples(train_data, train_label, unhot_axis=1, sample_num=0)[source]

print generated sequence samples

ztlearn.utils.data_utils.range_normalize(input_data, a=-1, b=1, axis=None)[source]

compute the range normalization for a given input matrix, range [a,b] and axis
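
Range normalization to [a, b] is usually a min-max rescale followed by an affine map into the target range; a hedged NumPy sketch (axis handling omitted, not the library code):

    import numpy as np

    input_data = np.array([2.0, 4.0, 6.0, 10.0])
    a, b = -1, 1

    scaled = (input_data - input_data.min()) / (input_data.max() - input_data.min())
    range_normalized = a + (b - a) * scaled   # [-1.0, -0.5, 0.0, 1.0]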

ztlearn.utils.data_utils.shuffle_data(input_data, input_label, random_seed=None)[source]

perform a randomized shuffle on a given input dataset

ztlearn.utils.data_utils.train_test_split(samples, labels, test_size=0.2, shuffle=True, random_seed=None, cut_off=None)[source]

generate a train vs test split given a test size
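
Conceptually, such a split shuffles the indices and cuts them at the test-size boundary; a minimal NumPy sketch of the idea (the library's return order and cut_off handling are not shown):

    import numpy as np

    samples = np.arange(10).reshape(10, 1)
    labels  = np.arange(10)
    test_size = 0.2

    rng = np.random.default_rng(seed=3)
    idx = rng.permutation(len(samples))

    split = int(len(samples) * (1 - test_size))
    train_x, test_x = samples[idx[:split]], samples[idx[split:]]
    train_y, test_y = labels[idx[:split]], labels[idx[split:]]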

ztlearn.utils.data_utils.unhot(one_hot, unhot_axis=1)[source]

reverse one hot encoded data back to labels
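
Reversing a one-hot encoding amounts to taking the argmax along the encoded axis; a minimal NumPy sketch (illustrative only):

    import numpy as np

    one_hot_data = np.array([[1., 0., 0.],
                             [0., 0., 1.]])
    labels = np.argmax(one_hot_data, axis=1)   # [0, 2]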

ztlearn.utils.data_utils.z_score(input_data, axis=None)[source]

compute the z score for a given input matrix and axis
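
The z-score subtracts the mean and divides by the standard deviation; a minimal NumPy sketch (axis handling omitted, not the library code):

    import numpy as np

    input_data = np.array([2.0, 4.0, 6.0])
    z = (input_data - input_data.mean()) / input_data.std()
    # [-1.2247, 0.0, 1.2247]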

ztlearn.utils.im2col_utils module

ztlearn.utils.im2col_utils.col2im_indices(cols, x_shape, field_height=3, field_width=3, padding=((0, 0), (0, 0)), stride=1)[source]

An implementation of col2im based on fancy indexing and np.add.at

ztlearn.utils.im2col_utils.get_im2col_indices(x_shape, field_height=3, field_width=3, padding=((0, 0), (0, 0)), stride=1)[source]
ztlearn.utils.im2col_utils.get_pad(padding, input_height, input_width, stride_height, stride_width, kernel_height, kernel_width)[source]
ztlearn.utils.im2col_utils.im2col_indices(x, field_height, field_width, padding, stride=1)[source]

An implementation of im2col based on some fancy indexing
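
The im2col idea is to gather every receptive-field patch into a column so convolution becomes a matrix multiply; a rough NumPy sketch using sliding_window_view (the column layout here is illustrative and may differ from the library's):

    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    # toy input: batch of 1 image, 1 channel, 4 x 4 (assumed example values)
    x = np.arange(16, dtype=float).reshape(1, 1, 4, 4)

    # all 3 x 3 patches -> shape (1, 1, 2, 2, 3, 3)
    patches = sliding_window_view(x, (3, 3), axis=(2, 3))

    # flatten each patch into a column -> shape (9, 4)
    cols = patches.reshape(-1, 3 * 3).T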

ztlearn.utils.plot_utils module

ztlearn.utils.plot_utils.plot_generated_img_samples(test_label, predictions, fig_dims=(6, 6), dataset='digits', channels=1, to_save=False, iteration=0, model_name='')[source]
ztlearn.utils.plot_utils.plot_img_results(test_data, test_label, predictions, fig_dims=(6, 6), dataset='digits', channels=1)[source]
ztlearn.utils.plot_utils.plot_img_samples(train_data, train_target=None, fig_dims=(6, 6), dataset='digits', channels=1)[source]
ztlearn.utils.plot_utils.plot_kmeans(data, labels=None, centroids=None, model_name='K-Means', model_clusters=1, to_save=False, fig_dims=(8, 6), title_dict={'size': 10})[source]
ztlearn.utils.plot_utils.plot_metric(metric, epoch, train, valid, model_name='', to_save=False, plot_dict={'linewidth': 0.8}, fig_dims=(8, 6), title_dict={'size': 10}, ylabel_dict={'size': 10}, xlabel_dict={'size': 10}, legend=['train', 'valid'], legend_dict={'loc': 'upper right'})[source]
ztlearn.utils.plot_utils.plot_opt_viz(dims, x, y, z, f_solution, overlay='plot', to_save=False, title='Optimization', title_dict={'size': 14}, fig_dims=(8, 6), xticks_dict={'size': 14}, yticks_dict={'size': 14}, xlabel='$\\theta^1$', xlabel_dict={'size': 14}, ylabel='$\\theta^2$', ylabel_dict={'size': 14}, legend=['train', 'valid'], legend_dict={})[source]
ztlearn.utils.plot_utils.plot_pca(components, n_components=2, colour_array=None, model_name='PCA', to_save=False, fig_dims=(8, 6), title_dict={'size': 10})[source]
ztlearn.utils.plot_utils.plot_regression_results(train_data, train_label, test_data, test_label, input_data, pred_line, mse, super_title, y_label, x_label, model_name='', to_save=False, fig_dims=(8, 6), font_size=10)[source]
ztlearn.utils.plot_utils.plot_tiled_img_samples(train_data, train_target=None, fig_dims=(6, 6), dataset='digits', channels=1)[source]
ztlearn.utils.plot_utils.plotter(x, y=[], plot_dict={}, fig_dims=(7, 5), title='Model', title_dict={}, ylabel='y-axis', ylabel_dict={}, xlabel='x-axis', xlabel_dict={}, legend=[], legend_dict={}, file_path='', to_save=False, plot_type='line', cmap_name=None, cmap_number=10, grid_on=True)[source]

ztlearn.utils.sequence_utils module

ztlearn.utils.sequence_utils.gen_mult_sequence_xtym(nums, cols=10, factor=10, tensor_dtype=<class 'int'>)[source]
ztlearn.utils.sequence_utils.gen_mult_sequence_xtyt(nums, cols=10, factor=10, tensor_dtype=<class 'int'>)[source]

ztlearn.utils.text_utils module

ztlearn.utils.text_utils.gen_char_sequence_xtym(text, maxlen, step, tensor_dtype=<class 'int'>)[source]
ztlearn.utils.text_utils.gen_char_sequence_xtyt(text, maxlen, step, tensor_dtype=<class 'int'>)[source]
ztlearn.utils.text_utils.get_sentence_tokens(text_list, maxlen=None, dtype='int32')[source]
ztlearn.utils.text_utils.longest_sentence(sentences)[source]

find the longest sentence in a list of sentences

ztlearn.utils.text_utils.pad_sequence(sequence, maxlen=None, dtype='int32', padding='pre', truncating='pre', value=0.0)[source]

pad or truncate sequences to a length of maxlen (defaults to the length of the longest sequence)
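
A rough sketch of the common 'pre' padding/truncating behaviour (assumed convention; the library's exact semantics may differ):

    import numpy as np

    sequences = [[1, 2, 3], [4, 5], [6, 7, 8, 9, 10]]
    maxlen = 4

    padded = np.zeros((len(sequences), maxlen), dtype='int32')
    for i, seq in enumerate(sequences):
        seq = seq[-maxlen:]              # 'pre' truncating keeps the tail
        padded[i, -len(seq):] = seq      # 'pre' padding fills the value (0) at the front
    # [[ 0  1  2  3]
    #  [ 0  0  4  5]
    #  [ 7  8  9 10]]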

ztlearn.utils.time_deco_utils module

class ztlearn.utils.time_deco_utils.LogIfBusy(func)[source]

Bases: object

Module contents

ztlearn.utils.gen_mult_sequence_xtyt(nums, cols=10, factor=10, tensor_dtype=<class 'int'>)[source]
ztlearn.utils.gen_mult_sequence_xtym(nums, cols=10, factor=10, tensor_dtype=<class 'int'>)[source]
ztlearn.utils.plot_metric(metric, epoch, train, valid, model_name='', to_save=False, plot_dict={'linewidth': 0.8}, fig_dims=(8, 6), title_dict={'size': 10}, ylabel_dict={'size': 10}, xlabel_dict={'size': 10}, legend=['train', 'valid'], legend_dict={'loc': 'upper right'})[source]
ztlearn.utils.plot_kmeans(data, labels=None, centroids=None, model_name='K-Means', model_clusters=1, to_save=False, fig_dims=(8, 6), title_dict={'size': 10})[source]
ztlearn.utils.plot_pca(components, n_components=2, colour_array=None, model_name='PCA', to_save=False, fig_dims=(8, 6), title_dict={'size': 10})[source]
ztlearn.utils.plot_regression_results(train_data, train_label, test_data, test_label, input_data, pred_line, mse, super_title, y_label, x_label, model_name='', to_save=False, fig_dims=(8, 6), font_size=10)[source]
ztlearn.utils.plot_img_samples(train_data, train_target=None, fig_dims=(6, 6), dataset='digits', channels=1)[source]
ztlearn.utils.plot_img_results(test_data, test_label, predictions, fig_dims=(6, 6), dataset='digits', channels=1)[source]
ztlearn.utils.plot_generated_img_samples(test_label, predictions, fig_dims=(6, 6), dataset='digits', channels=1, to_save=False, iteration=0, model_name='')[source]
ztlearn.utils.plot_tiled_img_samples(train_data, train_target=None, fig_dims=(6, 6), dataset='digits', channels=1)[source]
ztlearn.utils.unhot(one_hot, unhot_axis=1)[source]

reverse one hot encoded data back to labels

ztlearn.utils.one_hot(labels, num_classes=None)[source]

generate a one-hot encoding for a given set of labels

ztlearn.utils.min_max(input_data, axis=None)[source]

compute the min max standardization for a given input matrix and axis

ztlearn.utils.z_score(input_data, axis=None)[source]

compute the z score for a given input matrix and axis

ztlearn.utils.normalize(input_data, axis=-1, order=2)[source]

compute the normalization of a given input matrix for the specified order and axis

ztlearn.utils.print_pad(pad_count, pad_char='\n')[source]

pad output with a total of pad_count characters of type pad_char

ztlearn.utils.custom_tuple(tup)[source]

customize a tuple to display comma-separated numbers

ztlearn.utils.minibatches(input_data, input_label, batch_size, shuffle)[source]

generate minibatches on a given input data matrix

ztlearn.utils.shuffle_data(input_data, input_label, random_seed=None)[source]

perform a randomized shuffle on a given input dataset

ztlearn.utils.computebar(total, curr, size=45, sign='#', prefix='Computing')[source]

generate a graphical loading bar [####----] for a given iteration

ztlearn.utils.clip_gradients(grad, g_min=-1.0, g_max=1.0)[source]

enforce min and max boundaries on a given gradient

ztlearn.utils.range_normalize(input_data, a=-1, b=1, axis=None)[source]

compute the range normalization for a given input matrix, range [a,b] and axis

ztlearn.utils.accuracy_score(predictions, targets)[source]

compute the average accuracy score of predictions vs targets

ztlearn.utils.train_test_split(samples, labels, test_size=0.2, shuffle=True, random_seed=None, cut_off=None)[source]

generate a train vs test split given a test size

ztlearn.utils.print_seq_samples(train_data, train_label, unhot_axis=1, sample_num=0)[source]

print generated sequence samples

ztlearn.utils.print_seq_results(predicted, test_label, test_data, unhot_axis=1, interval=5)[source]

print results for a model predicting a sequence

ztlearn.utils.print_results(predictions, test_labels, num_samples=20)[source]

print model targeted vs predicted results