Commit 0c7399c

Added Data Spec README
1 parent 7057ba8 commit 0c7399c

1 file changed

Lines changed: 54 additions & 26 deletions

File tree

config/DBN/README.md

Python-DNN Config
=================

Model Config
------------

* `nnetType` : (Mandatory) Type of network (CNN/RBM/SDA/DNN)
* `train_data` : (Mandatory) The working directory containing data configuration and output
* `wdir` : (Mandatory) Working directory.
* `data_spec` : (Mandatory) The path of the data specification relative to `model_config.json`
* `nnet_spec` : (Mandatory) The path of the network configuration specification relative to `model_config.json`
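
A minimal sketch of these mandatory top-level fields in `model_config.json` (all paths and values here are illustrative, not taken from a real setup; other mandatory fields described below are omitted):

```json
{
    "nnetType": "DNN",
    "train_data": "exp/dnn",
    "wdir": "exp/dnn/work",
    "data_spec": "data_spec.json",
    "nnet_spec": "nnet_spec.json"
}
```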

* `output_file` : (Mandatory) The path of the RBM network output file relative to `wdir`
* `input_file` : The path of the pre-trained/fine-tuned network input file relative to `wdir`. (Mandatory for DNN)

* `logger_level` : Logger level. Valid values are "INFO", "DEBUG" and "ERROR"

* `batch_size` : Mini-batch size used while training (default 128)
* `n_ins` : Dimension of the input (Mandatory for all except CNN)
* `n_outs` : (Mandatory) Dimension of the output (number of classes)
* `input_shape` : The input shape of a given feature vector. (Mandatory for CNN; should be an array)
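
For example, a CNN uses `input_shape` in place of `n_ins` (an illustrative MNIST-style fragment, not from the source):

```json
"nnetType": "CNN",
"input_shape": [1, 28, 28],
"n_outs": 10
```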

* `finetune_method` : Two methods are supported:

> 1. `C` : Constant learning rate.
> 2. `E` : Exponential decay.

* `finetune_rate` : Configuration of the learning method. Contains a JSON object with the following params:

> param | description | default value | learning method
> :-----|:------------|:--------------:|:---------------:
> … | | |
> `min_epoch_decay_start` | | 15 | E
> `init_error` | | 100 | E
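
As a sketch, an exponential-decay setup using only the two parameters shown in the table rows above (values are the stated defaults; the elided rows would add further fields):

```json
"finetune_method": "E",
"finetune_rate": {
    "min_epoch_decay_start": 15,
    "init_error": 100
}
```
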
* `finetune_momentum` : The momentum factor used while fine-tuning
* `export_path` : Path (relative to `wdir`) for writing (bottleneck) features.
* `processes` : Processes to be run by the program. Contains a JSON object with the following params:

> * `pretraining` : Whether pre-training is needed. (Invalid for DNN and CNN; default value = false)
> * `finetuning` : Whether fine-tuning is needed. (Default value = false)
> * `testing` : Whether testing is needed. (Default value = false)
> * `export_data` : Whether extracted features should be written to file. If true, `export_path` is required. (Default value = false)
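
A sketch of a `processes` object that pre-trains, fine-tunes and tests without exporting features (values illustrative):

```json
"processes": {
    "pretraining": true,
    "finetuning": true,
    "testing": true,
    "export_data": false
}
```
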
###### Specific to DBN (RBM)

* `gbrbm_learning_rate` : Pre-training learning rate for the GB-RBM layer. (Default value = 0.005)
* `pretraining_learning_rate` : Pre-training learning rate for all layers except the GB-RBM layer. (Default value = 0.08)
* `pretraining_epochs` : Number of pre-training epochs. (Default value = 10)
* `initial_pretrain_momentum` : The initial momentum factor while pre-training. (Default value = 0.5)
* `final_pretrain_momentum` : The final momentum factor while pre-training. (Default value = 0.9)
* `initial_pretrain_momentum_epoch` : Number of epochs with the initial momentum factor before switching to the final momentum factor. (Default value = 5)
* `keep_layer_num` : The layer from which pre-training should start. (Default value = 0.) If non-zero, the layers below it are initialized with weights from `input_file`
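
Collecting the DBN (RBM) parameters above into one fragment (a sketch; every value is the stated default):

```json
"gbrbm_learning_rate": 0.005,
"pretraining_learning_rate": 0.08,
"pretraining_epochs": 10,
"initial_pretrain_momentum": 0.5,
"final_pretrain_momentum": 0.9,
"initial_pretrain_momentum_epoch": 5,
"keep_layer_num": 0
```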

###### Specific to SDA

* `pretrain_lr` : Learning rate to be used during pre-training. (Default value = 0.08)

_____________________________________________________________________________________________________________

Data Specification
------------------

Data specification has 3 fields:

> 1. `training`
> 2. `validation`
> 3. `testing`

Each one is a JSON object with the following fields:

* `base_path` : (Mandatory) Base path of the data.
* `filename` : (Mandatory) Filename.
* `partition` : (Mandatory) Size of the data chunk to be loaded into memory at a time (in MiB)
* `random` : Whether to read the data in random order (Default value = true)
* `random_seed` : Seed for the random number generator if `random` is `true`
* `keep_flatten` : Whether to use the data as a flattened vector or reshape it (Default value = false)
* `reader_type` : (Mandatory) Type of reader (NP/T1/T2).
* `dim_shuffle` : How to reshape a given flattened vector. Used only if `keep_flatten` is `false`
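
A sketch of a `data_spec.json` with all three fields (filenames, paths and sizes are illustrative; `validation` and `testing` take the same shape as `training`):

```json
{
    "training": {
        "base_path": "/path/to/data",
        "filename": "train.pkl.gz",
        "partition": 512,
        "random": true,
        "random_seed": 42,
        "keep_flatten": false,
        "reader_type": "NP"
    },
    "validation": {
        "base_path": "/path/to/data",
        "filename": "valid.pkl.gz",
        "partition": 512,
        "reader_type": "NP"
    },
    "testing": {
        "base_path": "/path/to/data",
        "filename": "test.pkl.gz",
        "partition": 512,
        "reader_type": "NP"
    }
}
```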

________________________________________________________________________________________________________________
