We've tried very hard to make Distributed-FIJI light and adaptable, but keeping the configuration settings to a manageable number requires making some default assumptions.
Below is a non-comprehensive list of places where you can adapt the code to your own purposes.
## Changes you can make to Distributed-FIJI outside of the Docker container
***Location of ECS configuration files:** By default these are placed into your bucket with a prefix of 'ecsconfigs/'.
Alternate locations can be designated in the run script.
***Log configuration and location of exported logs:** Distributed-FIJI creates log groups with a default retention of 60 days (to avoid hitting the AWS limit of 250) and after finishing the run exports them into your bucket with a prefix of 'exportedlogs/LOG_GROUP_NAME/'.
These may be modified in the run script.
***Advanced EC2 configuration:** Any additional configuration of your EC2 spot fleet (such as installing additional packages or running scripts on startup) can be done by modifying the userData parameter in the run script.
***SQS queue detailed configuration:** Distributed-FIJI creates a queue where messages will be tried 10 times before being consigned to a DeadLetterQueue, and unprocessed messages will expire after 14 days (the AWS maximum).
These values can be modified in run.py.
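Those queue defaults can be sketched as the attributes dictionary that boto3's `create_queue` accepts. This is an illustrative sketch, not the actual code from run.py; the dead-letter queue ARN and queue name are placeholders:

```python
import json

# Sketch of the SQS queue attributes described above; the ARN is a placeholder.
DEADLETTER_ARN = "arn:aws:sqs:us-east-1:111111111111:MyDeadLetterQueue"

queue_attributes = {
    # Unprocessed messages expire after 14 days, the AWS maximum
    "MessageRetentionPeriod": str(14 * 24 * 60 * 60),
    # Messages are tried 10 times before being consigned to the DeadLetterQueue
    "RedrivePolicy": json.dumps({
        "deadLetterTargetArn": DEADLETTER_ARN,
        "maxReceiveCount": "10",
    }),
}

# With boto3 this dict would be passed as, e.g.:
# boto3.client("sqs").create_queue(QueueName="MyQueue", Attributes=queue_attributes)
print(queue_attributes["MessageRetentionPeriod"])  # -> 1209600 seconds (14 days)
```

Changing the retry count or retention in run.py amounts to changing these two attribute values.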
## Changes that will require you to make your own Docker container
***Fiji version:** We ship the most recent fiji-open-jdk-8, but if you want to use your own Dockerized version of a different Fiji build, you can edit the Dockerfile to call that Fiji Docker image instead.
***Alarm names or thresholds:** These can be modified in the run-worker script.
***Frequency or types of information included in the per-instance logs:** These can be adjusted in the instance-monitor script.
***Log stream names or logging level:** These can be modified in the fiji-worker.py script.
This is an example of one possible instance configuration of Distributed-FIJI.
This is one m4.16xlarge EC2 instance (64 CPUs, 250GB of RAM) with a 165 GB EBS volume mounted on it.
A spot fleet could contain many such instances.
It has 16 tasks (individual Docker containers).
Each Docker container uses 10GB of hard disk space and is assigned 4 CPUs and 15 GB of RAM (which it does not share with other Docker containers).
Each copy of Fiji runs a pipeline on one "job", which can be anything from a single image to an entire 384 well plate or timelapse movie.
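The resource arithmetic behind this example can be checked directly; all the numbers below come from the configuration just described:

```python
# Resource math for the example m4.16xlarge configuration above
instance_cpus = 64        # m4.16xlarge vCPUs
instance_ram_gb = 250     # instance RAM
ebs_volume_gb = 165       # mounted EBS volume

tasks_per_machine = 16    # Docker containers per instance
cpus_per_task = 4
ram_per_task_gb = 15
disk_per_task_gb = 10

# Each resource, summed across all containers, must fit on the instance
assert tasks_per_machine * cpus_per_task <= instance_cpus      # 64 of 64 CPUs
assert tasks_per_machine * ram_per_task_gb <= instance_ram_gb  # 240 of 250 GB RAM
assert tasks_per_machine * disk_per_task_gb <= ebs_volume_gb   # 160 of 165 GB disk
print("configuration fits")
```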
Read more about this and other configurations in [Step 1: Configuration](step_1_configuration.md).
## Step 1: Configuration

Once the config file is created, simply type `python run.py setup` to set up your jobs.
***APP_NAME:** This will be used to tie your clusters, tasks, services, logs, and alarms together.
It need not be unique, but it should be descriptive enough that you can tell jobs apart if you're running multiple jobs.
***
### DOCKER REGISTRY INFORMATION
***DOCKERHUB_TAG:** This is the encapsulated version of FIJI you will be running.
***
***EBS_VOL_SIZE:** The size of the temporary hard drive associated with each EC2 instance in GB.
The minimum allowed is 22.
If you have multiple Dockers running per machine, each Docker will have access to (EBS_VOL_SIZE/TASKS_PER_MACHINE) - 2 GB of space.
***DOWNLOAD_FILES:** Whether or not to download the image files to the EBS volume before processing.
This completely bypasses mounting the source bucket with S3FS.
This typically requires a larger EBS volume (depending on the size of your image sets, and how many sets are processed per group).
It avoids occasional issues with S3FS that can crop up on longer runs and permissions issues with mounting a source bucket.
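The per-Docker space formula above works out as follows; the volume sizes passed in are illustrative, not shipped defaults:

```python
def disk_per_docker_gb(ebs_vol_size: float, tasks_per_machine: int) -> float:
    """Disk space available to each Docker container, per the formula above:
    (EBS_VOL_SIZE / TASKS_PER_MACHINE) - 2 GB."""
    return ebs_vol_size / tasks_per_machine - 2

# With the minimum allowed 22 GB volume and a single Docker per machine:
print(disk_per_docker_gb(22, 1))  # -> 20.0 GB
```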
***
### DOCKER INSTANCE RUNNING ENVIRONMENT
***CPU_SHARES:** How many CPUs each Docker container may have. (1024 units = 1 core)
***MEMORY:** How much memory each Docker container may have.
***SCRIPT_DOWNLOAD_URL:** Where to download the FIJI script you will be running.
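As a sketch of how this section of the config file might look — only the variable names come from the documentation above; the values and URL are illustrative assumptions:

```python
# Illustrative config fragment; these values are examples, not shipped defaults.
CPU_SHARES = 4096  # 1024 units = 1 core, so each container gets 4 cores
MEMORY = 15000     # memory allotted to each Docker container
SCRIPT_DOWNLOAD_URL = "https://example.com/my_fiji_script.py"  # placeholder URL

print(CPU_SHARES // 1024, "cores per container")
```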
***
See [Step 0: Prep](step_0_prep.md) for more information.
***
### LOG GROUP INFORMATION
***LOG_GROUP_NAME:** The name to give the log group that will monitor the progress of your jobs and allow you to check performance or look for problems after the fact.
### REDUNDANCY CHECKS
***CHECK_IF_DONE_BOOL:** Whether or not to check the output folder before proceeding.
Case-insensitive.
If an analysis fails partway through (due to some of the files being in the wrong place, an AWS outage, a machine crash, etc.), setting this to 'True' allows you to resubmit the whole analysis but only reprocess jobs that haven't already been done.
This saves you from having to try to parse exactly which jobs succeeded versus failed or from having to pay to rerun the entire analysis.
If Distributed-FIJI determines the correct number of files are already in the output folder, it will designate that job as completed and move on to the next one.
If you actually do want to overwrite files that were previously generated (such as when you have improved a pipeline and no longer want the output of the old version), set this to 'False' to process jobs whether or not there are already files in the output folder.
***EXPECTED_NUMBER_FILES:** How many files need to be in the output folder in order to mark a job as completed.
***MIN_FILE_SIZE_BYTES:** The minimum size, in bytes, a file must be to count toward EXPECTED_NUMBER_FILES.
Useful for distinguishing jobs that exported smaller, corrupted files from those that exported full-size files.
***NECESSARY_STRING:** This allows you to optionally set a string that must be included in your file to count towards the total in EXPECTED_NUMBER_FILES.
This can be helpful if your pipeline puts out a mixture of file types and you want to count only how many images were produced, for example.
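One way to picture how these three settings interact — a minimal sketch, assuming the checker simply filters a listing of (name, size) pairs from the output folder; the function name and file names are hypothetical:

```python
def job_is_done(files, expected_number_files, min_file_size_bytes, necessary_string=""):
    """Count output files that are at least MIN_FILE_SIZE_BYTES and contain
    NECESSARY_STRING in their name; the job is done once enough qualify."""
    qualifying = [
        name for name, size in files
        if size >= min_file_size_bytes and necessary_string in name
    ]
    return len(qualifying) >= expected_number_files

# Two full-size images, one tiny (corrupted) image, and a CSV we don't count
listing = [("well_A01.tiff", 2_000_000), ("well_A02.tiff", 1_500_000),
           ("well_A03.tiff", 45), ("results.csv", 5_000)]
print(job_is_done(listing, 2, 1_000, ".tiff"))  # -> True: two full-size .tiff files
print(job_is_done(listing, 3, 1_000, ".tiff"))  # -> False: the 45-byte file doesn't count
```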
### EXAMPLE CONFIGURATIONS
This is an example of one possible configuration. It's a fairly large machine that is able to process 16 jobs at the same time.
This is an example of another possible configuration. When we run Distributed-FIJI we tend to prefer running a larger number of smaller machines; this is a configuration we often use. We might use a spot fleet of 100 of these machines (CLUSTER_MACHINES = 100).
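A sketch of what such a "many small machines" setup could look like in the config file — only CLUSTER_MACHINES = 100 comes from the text above; the machine type and other values are illustrative assumptions:

```python
# Illustrative "many small machines" configuration; only CLUSTER_MACHINES = 100
# is taken from the documentation - the rest are assumed example values.
MACHINE_TYPE = ["m4.xlarge"]  # assumed small instance type (4 vCPUs)
CLUSTER_MACHINES = 100        # number of instances in the spot fleet
TASKS_PER_MACHINE = 1         # assumed: one Docker container per machine
CPU_SHARES = 4096             # 1024 units = 1 core -> 4 cores per container
EBS_VOL_SIZE = 22             # the minimum allowed volume size, in GB

# Total concurrent Docker containers across the fleet
print(CLUSTER_MACHINES * TASKS_PER_MACHINE)
```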