Ensembl data load
Revision as of 10:49, 14 September 2010
The Ensembl system integrates a job scheduler with the data loaders. You schedule a RepeatMasker run, with all its parameters, in MySQL tables, and later find the results of that run in your MySQL databases. The complete job description is called an "analysis"; it consists of individual steps.
The first step is always pulling in the sequences themselves (the "dummy" step). The next steps then process these sequences. "Rules" are added at the end to say which step depends on which other step. This whole system is called "ensembl-pipeline". There is a newer system written by the ensembl-compara people, called "ensembl-hive", which we are not covering here. The hive is more straightforward to use in some respects; see the CVS module ensembl-hive and its doc directory.
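The rule mechanism can be sketched in miniature: each rule names a goal analysis and the condition it waits for, and a step only becomes runnable once its condition has finished. This is an illustrative toy in Python, not Ensembl code; the function and argument names are made up for the sketch.

```python
# Toy sketch of rule-driven scheduling: a rule says "goal" may only run
# after "condition" has completed. Names here are illustrative only.
def run_order(rules, roots):
    """Return an execution order given {goal: condition} rules and the
    condition-free root ("dummy") steps."""
    done = list(roots)
    pending = dict(rules)
    while pending:
        runnable = [g for g, c in pending.items() if c in done]
        if not runnable:
            raise RuntimeError("unsatisfiable rules: %s" % pending)
        for goal in runnable:
            done.append(goal)
            del pending[goal]
    return done

# SubmitContig is the dummy step; RepeatMask waits for it.
order = run_order({"RepeatMask": "SubmitContig"}, ["SubmitContig"])
print(order)  # ['SubmitContig', 'RepeatMask']
```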
You need to download and extract the example files from EnsemblWorkshopFiles.tar.gz for the following steps.
Load RepeatMasker file
- To make things easier, let's set a little shortcut:
export DBSPEC="-dbhost 127.0.0.1 -dbuser ens-training -dbport 3306 -dbname mouse37_mini_ref -dbpass workshop"
- Analysis step 1: Create a "dummy analysis" configuration which simply selects the sequences to analyse (here: contigs), e.g. create a file submit_ana.conf:
[SubmitContig]
module=Dummy
input_id_type=CONTIG
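These analysis .conf files are plain INI-style text (a bracketed section header followed by key=value lines), so they can be inspected outside the pipeline with stock tools. A quick illustrative check with Python's configparser, assuming the file follows standard INI semantics:

```python
# Illustrative only: read an analysis .conf section the way
# Python's stdlib configparser would.
import configparser

conf = configparser.ConfigParser()
conf.read_string("""
[SubmitContig]
module=Dummy
input_id_type=CONTIG
""")

print(conf["SubmitContig"]["module"])         # Dummy
print(conf["SubmitContig"]["input_id_type"])  # CONTIG
```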
- Load the "dummy analysis" into the database:
perl $HOME/cvs_checkout/ensembl-pipeline/scripts/analysis_setup.pl $DBSPEC -read -file submit_ana.conf
- Analysis step 2: Define the real analysis, e.g. in repeatmask_ana.conf:
[RepeatMask]
db=repbase
db_version=0129
db_file=repbase
program=RepeatMask
program_version=3.1.8
program_file=/path/to/repmasker/RepeatMask
parameters=-nolow -species mouse -s
module=RepeatMask
gff_source=RepeatMask
gff_feature=repeat
input_id_type=CONTIG
- Load the analysis into the MySQL database:
$HOME/cvs_checkout/ensembl-pipeline/scripts/analysis_setup.pl $DBSPEC -read -file repeatmask_ana.conf
- see what happened:
SELECT * from analysis\G
*************************** 1. row ***************************
    analysis_id: 1
        created: 2010-09-13 16:50:16
     logic_name: SubmitContig
             db: NULL
     db_version: NULL
        db_file: NULL
        program: NULL
program_version: NULL
   program_file: NULL
     parameters: NULL
         module: Dummy
 module_version: NULL
     gff_source: NULL
    gff_feature: NULL
*************************** 2. row ***************************
    analysis_id: 2
        created: 2010-09-13 16:14:11
     logic_name: RepeatMask
             db: repbase
     db_version: 0129
        db_file: repbase
        program: RepeatMask
program_version: 3.1.8
   program_file: /path/to/repmasker/RepeatMask
     parameters: -nolow -species mouse -s
         module: RepeatMask
 module_version: NULL
     gff_source: RepeatMask
    gff_feature: repeat
- Add a rule which says that RepeatMask requires the contig sequences:
perl $HOME/cvs_checkout/ensembl-pipeline/scripts/RuleHandler.pl $DBSPEC \
     -insert -goal RepeatMask \
     -condition SubmitContig
- Add the "input_ids" step which adds the sequences to the job description:
perl $HOME/cvs_checkout/ensembl-pipeline/scripts/make_input_ids $DBSPEC -logic_name SubmitContig -coord_system contig -slice 150k
- Check what has changed:
select ia.input_id, a.logic_name from input_id_analysis ia, analysis a
where ia.analysis_id = a.analysis_id;
+---------------------------------------+--------------+
| input_id                              | logic_name   |
+---------------------------------------+--------------+
| contig:NCBIM37:AC087062.25:1:224451:1 | SubmitContig |
| contig:NCBIM37:AC138620.4:1:209846:1  | SubmitContig |
| contig:NCBIM37:AC153919.8:1:264561:1  | SubmitContig |
| contig:NCBIM37:AL589742.21:1:125641:1 | SubmitContig |
+---------------------------------------+--------------+
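As the table shows, each input_id packs a whole slice specification into one colon-separated string: coordinate system, assembly version, sequence region name, start, end, and strand. A small parser sketch (illustrative, not part of the Ensembl code base):

```python
# Illustrative helper: unpack an Ensembl-pipeline style input_id string
# of the form coord_system:version:seq_region:start:end:strand.
def parse_input_id(input_id):
    cs, version, name, start, end, strand = input_id.split(":")
    return {"coord_system": cs, "version": version, "seq_region": name,
            "start": int(start), "end": int(end), "strand": int(strand)}

info = parse_input_id("contig:NCBIM37:AC087062.25:1:224451:1")
print(info["seq_region"], info["end"])  # AC087062.25 224451
```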
Run pipeline
- Have a look at the central pipeline config file:
$HOME/workshop/genebuild/configs/pipeline_config/modules/Bio/EnsEMBL/Pipeline/Config/BatchQueue.pm
- Set the PERL5LIB environment variable so the pipeline system finds its config files:
export PERL5LIB=$HOME/workshop/genebuild/configs/pipeline_config/modules:${PERL5LIB}
- Test the pipeline
perl $HOME/cvs_checkout/ensembl-analysis/scripts/test_RunnableDB $DBSPEC \
     -logic_name RepeatMask \
     -input_id contig:NCBIM37:AC087062.25:1:224451:1 \
     -verbose
- Run the pipeline
perl $HOME/cvs_checkout/ensembl-pipeline/scripts/rulemanager.pl $DBSPEC \
     -logic_name RepeatMask
- You can use the script "monitor $DBSPEC -current -finishedp" to check how much is already done.
- Dump out the genome again to see that the repeat-masked sequences are now in lowercase:
perl $HOME/cvs_checkout/ensembl-analysis/scripts/sequence_dump.pl \
     -dbhost 127.0.0.1 -dbuser ens-training -dbport 3306 \
     -dbname mouse37_mini_ref -mask -softmask -mask_repeat RepeatMask \
     -dbpass workshop -coord_system_name chromosome \
     -output_dir $HOME/workshop/genebuild/output/softmasked_seq
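Soft-masking lowercases the repeat regions instead of replacing them with Ns, so downstream tools can still read the underlying bases while seeing where the repeats are. A minimal sketch of the idea; the sequence and coordinates are made up for illustration:

```python
# Illustrative soft-masking: lowercase the bases inside each 1-based,
# inclusive (start, end) repeat interval, leave everything else upper-case.
def softmask(seq, repeat_regions):
    bases = list(seq.upper())
    for start, end in repeat_regions:
        for i in range(start - 1, end):
            bases[i] = bases[i].lower()
    return "".join(bases)

# A 12 bp toy sequence with one repeat spanning bases 3-6:
print(softmask("ACGTACGTACGT", [(3, 6)]))  # ACgtacGTACGT
```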