From the Genomic Data Commons (GDC) website:
The National Cancer Institute’s (NCI’s) Genomic Data Commons (GDC) is a data sharing platform that promotes precision medicine in oncology. It is not just a database or a tool; it is an expandable knowledge network supporting the import and standardization of genomic and clinical data from cancer research programs. The GDC contains NCI-generated data from some of the largest and most comprehensive cancer genomic datasets, including The Cancer Genome Atlas (TCGA) and Therapeutically Applicable Research to Generate Effective Therapies (TARGET). For the first time, these datasets have been harmonized using a common set of bioinformatics pipelines, so that the data can be directly compared. As a growing knowledge system for cancer, the GDC also enables researchers to submit data, and harmonizes these data for import into the GDC. As more researchers add clinical and genomic data to the GDC, it will become an even more powerful tool for making discoveries about the molecular basis of cancer that may lead to better care for patients.
The data model for the GDC is complex, but it is worth a quick overview; a graphical representation is included here.
The GDC API exposes these nodes and edges in a somewhat simplified set of RESTful endpoints.
This quickstart section is meant to show basic functionality. More details are included later in this vignette and in the function-specific help.
This software is available at Bioconductor.org and can be installed via BiocManager::install().
To report bugs or problems, either submit a new issue on GitHub or run bug.report(package='GenomicDataCommons') from within R (which will redirect you to the new-issue page on GitHub).
Installation can be achieved via Bioconductor’s BiocManager
package.
if (!require("BiocManager"))
    install.packages("BiocManager")
BiocManager::install('GenomicDataCommons')
library(GenomicDataCommons)
The GenomicDataCommons package relies on network connectivity. In addition, the NCI GDC API must be operational and not under maintenance. The status() function can be used to check this connectivity and functionality.
GenomicDataCommons::status()
## $commit
## [1] "8acb9697bd3e944def954f8d1c0b5c22a0fbc8fa"
##
## $data_release
## [1] "Data Release 12.0 - August 23, 2018"
##
## $status
## [1] "OK"
##
## $tag
## [1] "1.15.1"
##
## $version
## [1] 1
And to check the status in code:
stopifnot(GenomicDataCommons::status()$status=="OK")
The following code builds a manifest
that can be used to guide the
download of raw data. Here, filtering finds gene expression files
quantified as raw counts using HTSeq
from ovarian cancer patients.
ge_manifest = files() %>%
filter( ~ cases.project.project_id == 'TCGA-OV' &
type == 'gene_expression' &
analysis.workflow_type == 'HTSeq - Counts') %>%
manifest()
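Before downloading, it can be useful to take a quick look at the manifest itself; a minimal check (the exact columns vary somewhat by GDC release):
# How many files matched, and what the manifest records look like
nrow(ge_manifest)
head(ge_manifest)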
After the query above identifies the 379 gene expression files, the next step is to download them. Using multiple processes to do the download speeds up the transfer very significantly in many cases. On a standard 1Gb connection, the following completes in about 30 seconds. The first time the data are downloaded, R will ask to create a cache directory (see ?gdc_cache for details of setting and interacting with the cache). Downloaded files will be stored in the cache directory, and future access to the same files will be served directly from the cache, avoiding repeated downloads.
fnames = lapply(ge_manifest$id[1:20],gdcdata)
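The cache location itself can be checked at any time; a quick sketch using the package's gdc_cache() helper (see ?gdc_cache):
# Show the directory where downloaded files are cached
gdc_cache()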
If the download had included controlled-access data, the download above would have needed to include a token. Details are available in the authentication section below.
The GenomicDataCommons can access the significant clinical, demographic, biospecimen, and annotation information contained in the NCI GDC.
expands = c("diagnoses","annotations",
"demographic","exposures")
clinResults = cases() %>%
GenomicDataCommons::select(NULL) %>%
GenomicDataCommons::expand(expands) %>%
results(size=50)
str(clinResults[[1]],list.len=10)
## List of 50
## $ 1f95a9f1-0327-4c89-a6fe-69809d0d8ebf:'data.frame': 1 obs. of 19 variables:
## ..$ progression_or_recurrence : chr "not reported"
## ..$ classification_of_tumor : chr "primary"
## ..$ last_known_disease_status : chr "not reported"
## ..$ tumor_grade : chr "not reported"
## ..$ tissue_or_organ_of_origin : chr "Breast, NOS"
## ..$ created_datetime : chr "2017-06-16T15:41:54.034178-05:00"
## ..$ updated_datetime : chr "2018-08-07T05:08:25.415044+00:00"
## ..$ primary_diagnosis : chr "Infiltrating duct carcinoma, NOS"
## ..$ submitter_id : chr "AD11218_diagnosis"
## ..$ site_of_resection_or_biopsy : chr "Breast, NOS"
## .. [list output truncated]
## $ 4060f92b-ab42-4233-8609-6e9baa11631d:'data.frame': 1 obs. of 19 variables:
## ..$ progression_or_recurrence : chr "not reported"
## ..$ classification_of_tumor : chr "metastasis"
## ..$ last_known_disease_status : chr "not reported"
## ..$ tumor_grade : chr "not reported"
## ..$ tissue_or_organ_of_origin : chr "Nervous system, NOS"
## ..$ created_datetime : chr "2017-06-19T09:23:29.064224-05:00"
## ..$ updated_datetime : chr "2018-08-07T05:08:25.415044+00:00"
## ..$ primary_diagnosis : chr "Oligodendroglioma, NOS"
## ..$ submitter_id : chr "AD8623_diagnosis"
## ..$ site_of_resection_or_biopsy : chr "Brain, NOS"
## .. [list output truncated]
## $ 90f4b65c-cfd4-4066-a5b9-842885c172e2:'data.frame': 1 obs. of 22 variables:
## ..$ classification_of_tumor : chr "not reported"
## ..$ last_known_disease_status : chr "not reported"
## ..$ updated_datetime : chr "2018-03-09T16:42:15.753583-06:00"
## ..$ primary_diagnosis : chr "C56.9"
## ..$ submitter_id : chr "TCGA-13-0727_diagnosis"
## ..$ tumor_stage : chr "not reported"
## ..$ age_at_diagnosis : int 26185
## ..$ vital_status : chr "dead"
## ..$ morphology : chr "8441/3"
## ..$ days_to_death : num 462
## .. [list output truncated]
## $ a4ec8b0b-0b53-43e6-90b4-e88a8a3cbc10:'data.frame': 1 obs. of 19 variables:
## ..$ progression_or_recurrence : chr "not reported"
## ..$ classification_of_tumor : chr "primary"
## ..$ last_known_disease_status : chr "not reported"
## ..$ tumor_grade : chr "not reported"
## ..$ tissue_or_organ_of_origin : chr "Breast, NOS"
## ..$ created_datetime : chr "2017-06-19T09:29:52.147488-05:00"
## ..$ updated_datetime : chr "2018-08-07T05:08:25.415044+00:00"
## ..$ primary_diagnosis : chr "Infiltrating duct carcinoma, NOS"
## ..$ submitter_id : chr "AD1214_diagnosis"
## ..$ site_of_resection_or_biopsy : chr "Breast, NOS"
## .. [list output truncated]
## $ 6e534dd3-81ac-4575-ab1e-02c21e10916d:'data.frame': 1 obs. of 22 variables:
## ..$ classification_of_tumor : chr "not reported"
## ..$ last_known_disease_status : chr "not reported"
## ..$ updated_datetime : chr "2018-03-08T19:30:04.867636-06:00"
## ..$ primary_diagnosis : chr "C24.0"
## ..$ submitter_id : chr "TCGA-W5-AA2X_diagnosis"
## ..$ tumor_stage : chr "stage ivb"
## ..$ age_at_diagnosis : int 24797
## ..$ vital_status : chr "dead"
## ..$ morphology : chr "8160/3"
## ..$ days_to_death : num 271
## .. [list output truncated]
## $ dfd8aaad-cf31-5260-8882-b471ddbcc8fd:'data.frame': 1 obs. of 17 variables:
## ..$ progression_or_recurrence : chr "not reported"
## ..$ last_known_disease_status : chr "not reported"
## ..$ tumor_grade : chr "Not Reported"
## ..$ tissue_or_organ_of_origin : chr "Not Reported"
## ..$ created_datetime : chr "2017-02-24T20:54:45.808091-06:00"
## ..$ updated_datetime : chr "2018-01-19T13:39:21.801433-06:00"
## ..$ primary_diagnosis : chr "not reported"
## ..$ submitter_id : chr "TARGET-50-PADXJZ_diagnosis"
## ..$ site_of_resection_or_biopsy: chr "C649"
## ..$ tumor_stage : chr "iii"
## .. [list output truncated]
## $ 72636f64-ba2c-512e-b76f-72ff91f7f0f7:'data.frame': 1 obs. of 17 variables:
## ..$ progression_or_recurrence : chr "not reported"
## ..$ last_known_disease_status : chr "not reported"
## ..$ tumor_grade : chr "Not Reported"
## ..$ tissue_or_organ_of_origin : chr "Not Reported"
## ..$ created_datetime : chr "2017-02-24T21:09:16.860616-06:00"
## ..$ updated_datetime : chr "2018-03-08T09:11:04.174187-06:00"
## ..$ primary_diagnosis : chr "not reported"
## ..$ submitter_id : chr "TARGET-20-PARPEK_diagnosis"
## ..$ site_of_resection_or_biopsy: chr "not reported"
## ..$ tumor_stage : chr "not reported"
## .. [list output truncated]
## $ 2fb5e38a-6b65-43ae-945c-9fa8109b2de5:'data.frame': 1 obs. of 19 variables:
## ..$ progression_or_recurrence : chr "not reported"
## ..$ classification_of_tumor : chr "primary"
## ..$ last_known_disease_status : chr "not reported"
## ..$ tumor_grade : chr "not reported"
## ..$ tissue_or_organ_of_origin : chr "Rectum, NOS"
## ..$ created_datetime : chr "2017-06-19T09:11:37.838594-05:00"
## ..$ updated_datetime : chr "2018-08-07T05:08:25.415044+00:00"
## ..$ primary_diagnosis : chr "Squamous cell carcinoma, NOS"
## ..$ submitter_id : chr "AD6701_diagnosis"
## ..$ site_of_resection_or_biopsy : chr "Rectum, NOS"
## .. [list output truncated]
## $ c34c0e02-2285-448f-bf8a-433e4caf5379:'data.frame': 1 obs. of 19 variables:
## ..$ progression_or_recurrence : chr "not reported"
## ..$ classification_of_tumor : chr "primary"
## ..$ last_known_disease_status : chr "not reported"
## ..$ tumor_grade : chr "not reported"
## ..$ tissue_or_organ_of_origin : chr "Lung, NOS"
## ..$ created_datetime : chr "2017-06-16T16:06:02.946257-05:00"
## ..$ updated_datetime : chr "2018-08-07T05:08:25.415044+00:00"
## ..$ primary_diagnosis : chr "Adenocarcinoma, NOS"
## ..$ submitter_id : chr "AD4438_diagnosis"
## ..$ site_of_resection_or_biopsy : chr "Lung, NOS"
## .. [list output truncated]
## $ ee3fa9cd-2d82-4bc1-b71a-87818045dd70:'data.frame': 1 obs. of 19 variables:
## ..$ progression_or_recurrence : chr "not reported"
## ..$ classification_of_tumor : chr "primary"
## ..$ last_known_disease_status : chr "not reported"
## ..$ tumor_grade : chr "not reported"
## ..$ tissue_or_organ_of_origin : chr "Lung, NOS"
## ..$ created_datetime : chr "2017-06-16T15:39:04.464841-05:00"
## ..$ updated_datetime : chr "2018-08-07T05:08:25.415044+00:00"
## ..$ primary_diagnosis : chr "Squamous cell carcinoma, NOS"
## ..$ submitter_id : chr "AD9891_diagnosis"
## ..$ site_of_resection_or_biopsy : chr "Lung, NOS"
## .. [list output truncated]
## [list output truncated]
# or listviewer::jsonedit(clinResults)
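For downstream analysis, the per-case data.frames can be combined into a single table. A minimal sketch with dplyr, assuming (as the str() output above suggests) that the first element of clinResults contains one diagnosis record per case:
library(dplyr)
# Stack the per-case diagnosis data.frames, keeping the case UUID as a column;
# columns absent for some cases are filled with NA
diag_df = bind_rows(clinResults[[1]], .id = "case_id")
head(diag_df)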
This package design is meant to have some similarities to the “hadleyverse” approach of dplyr. Roughly, the functionality for finding and accessing files and metadata can be divided into simple query constructors, a set of verbs for manipulating those queries (filtering, field selection, and faceting), and verbs that retrieve results from the GDC.
In addition, there are auxiliary functions for asking the GDC API for information about available and default fields, for slicing BAM files, and for downloading actual data files. Here is an overview of the functionality; see the individual function and method documentation for specific details.
- Creating a query: projects(), cases(), files(), annotations()
- Manipulating a query: filter(), facet(), select()
- Introspection on the GDC API fields: mapping(), available_fields(), default_fields(), grep_fields(), field_picker(), available_values(), available_expand()
- Executing an API call to retrieve query results: results(), count(), response()
- Raw data file downloads: gdcdata(), transfer(), gdc_client()
- Summarizing and aggregating field values (faceting): aggregations()
- Authentication: gdc_token()
- BAM file slicing: slicing()
There are two main classes of operations when working with the NCI GDC: querying metadata (to find files of interest or to summarize cases, files, and projects) and transferring raw or processed data from the GDC to another computer. Both classes of operation are reviewed in detail in the following sections.
Vast amounts of metadata about cases (patients, basically), files, projects, and so-called annotations are available via the NCI GDC API. Typically, one will want to query metadata either to focus in on a set of files for download or transfer, or to perform so-called aggregations (pivot tables and facets, similar to the R table() functionality).
Querying metadata starts with creating a “blank” query. One
will often then want to filter
the query to limit results prior
to retrieving results. The GenomicDataCommons package has
helper functions for listing fields that are available for
filtering.
In addition to fetching results, the GDC API allows faceting, or aggregating; this is useful for compiling reports, generating dashboards, or building user interfaces to GDC data (see the GDC web query interface for a non-R-based example).
A query of the GDC starts its life in R. Queries follow the four metadata
endpoints available at the GDC. In particular, there are four convenience
functions that each create GDCQuery
objects (actually, specific subclasses of
GDCQuery
):
projects()
cases()
files()
annotations()
pquery = projects()
The pquery object is now an object of (S3) class GDCQuery (and also gdc_projects and list). The object contains the following elements:
- fields: a character vector of the fields that will be returned when data are retrieved. If no fields are specified, as with the bare projects() call above, the default fields from the GDC are used (see default_fields()).
- filters: populated after calling the filter() method and used to filter results on retrieval.
- facets: a character vector of field names used for aggregating data in a call to aggregations().
Looking at the actual object (get used to using str()!), note that the query contains no results.
str(pquery)
## List of 5
## $ fields : chr [1:10] "dbgap_accession_number" "disease_type" "intended_release_date" "name" ...
## $ filters: NULL
## $ facets : NULL
## $ legacy : logi FALSE
## $ expand : NULL
## - attr(*, "class")= chr [1:3] "gdc_projects" "GDCQuery" "list"
[ GDC pagination documentation ]
With a query object available, the next step is to retrieve results from the GDC. The most basic type of result we can get is a simple count() of records available that satisfy the filter criteria. Note that we have not set any filters, so a count() here will represent all the project records publicly available at the GDC in the “default” archive.
pcount = count(pquery)
# or
pcount = pquery %>% count()
pcount
## [1] 40
The results()
method will fetch actual results.
presults = pquery %>% results()
These results are
returned from the GDC in JSON format and
converted into a (potentially nested) list in R. The str()
method is useful
for taking a quick glimpse of the data.
str(presults)
## List of 9
## $ dbgap_accession_number: chr [1:10] "phs000467" NA "phs000466" "phs000465" ...
## $ disease_type :List of 10
## ..$ TARGET-NBL : chr "Neuroblastoma"
## ..$ TCGA-SARC : chr "Sarcoma"
## ..$ TARGET-CCSK: chr "Clear Cell Sarcoma of the Kidney"
## ..$ TARGET-AML : chr "Acute Myeloid Leukemia"
## ..$ TCGA-ACC : chr "Adrenocortical Carcinoma"
## ..$ TCGA-LGG : chr "Brain Lower Grade Glioma"
## ..$ TCGA-THCA : chr "Thyroid Carcinoma"
## ..$ TCGA-MESO : chr "Mesothelioma"
## ..$ TCGA-READ : chr "Rectum Adenocarcinoma"
## ..$ TCGA-KIRC : chr "Kidney Renal Clear Cell Carcinoma"
## $ releasable : logi [1:10] FALSE FALSE FALSE FALSE FALSE FALSE ...
## $ released : logi [1:10] TRUE TRUE TRUE TRUE TRUE TRUE ...
## $ state : chr [1:10] "open" "open" "open" "open" ...
## $ primary_site :List of 10
## ..$ TARGET-NBL : chr "Nervous System"
## ..$ TCGA-SARC : chr "Soft Tissue"
## ..$ TARGET-CCSK: chr "Kidney"
## ..$ TARGET-AML : chr "Blood"
## ..$ TCGA-ACC : chr "Adrenal Gland"
## ..$ TCGA-LGG : chr "Brain"
## ..$ TCGA-THCA : chr "Thyroid"
## ..$ TCGA-MESO : chr "Pleura"
## ..$ TCGA-READ : chr "Colorectal"
## ..$ TCGA-KIRC : chr "Kidney"
## $ project_id : chr [1:10] "TARGET-NBL" "TCGA-SARC" "TARGET-CCSK" "TARGET-AML" ...
## $ id : chr [1:10] "TARGET-NBL" "TCGA-SARC" "TARGET-CCSK" "TARGET-AML" ...
## $ name : chr [1:10] "Neuroblastoma" "Sarcoma" "Clear Cell Sarcoma of the Kidney" "Acute Myeloid Leukemia" ...
## - attr(*, "row.names")= int [1:10] 1 2 3 4 5 6 7 8 9 10
## - attr(*, "class")= chr [1:3] "GDCprojectsResults" "GDCResults" "list"
By default, only 10 records are returned. We can use the size and from arguments to results() to either page through results or to change the number of results returned. Finally, there is a convenience method, results_all(), that will simply fetch all the available results given a query. Note that results_all() may take a long time and return HUGE result sets if not used carefully. Using a combination of count() and results() to get a sense of the expected data size is probably warranted before calling results_all().
length(ids(presults))
## [1] 10
presults = pquery %>% results_all()
length(ids(presults))
## [1] 40
# includes all records
length(ids(presults)) == count(pquery)
## [1] TRUE
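To page through results rather than fetching everything at once, size and from can be combined; a small sketch:
# First page of five project records, then the next page starting at offset 5
page1 = pquery %>% results(size = 5)
page2 = pquery %>% results(size = 5, from = 5)
ids(page1)
ids(page2)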
Extracting subsets of results or manipulating the results into a more conventional R data structure is not easily generalizable. However, the purrr, rlist, and data.tree packages are all potentially of interest for manipulating complex, nested list structures. For viewing the results in an interactive viewer, consider the listviewer package.
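As one small sketch of such manipulation, purrr can flatten a couple of the nested list elements of the project results retrieved above into a plain data.frame:
library(purrr)
# Collapse possibly multi-valued entries to single strings, one row per project
proj_df = data.frame(
    project_id   = presults$project_id,
    primary_site = map_chr(presults$primary_site, paste, collapse = ", "),
    disease_type = map_chr(presults$disease_type, paste, collapse = ", "),
    stringsAsFactors = FALSE
)
head(proj_df)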
Central to querying and retrieving data from the GDC is the ability to specify which fields to return, to filter by fields and values, and to facet or aggregate. The GenomicDataCommons package includes two simple functions, available_fields() and default_fields(). Each can operate on a character(1) endpoint name (“cases”, “files”, “annotations”, or “projects”) or on a GDCQuery object.
default_fields('files')
## [1] "access" "acl" "created_datetime"
## [4] "data_category" "data_format" "data_type"
## [7] "error_type" "experimental_strategy" "file_autocomplete"
## [10] "file_id" "file_name" "file_size"
## [13] "imaging_date" "magnification" "md5sum"
## [16] "origin" "platform" "read_pair_number"
## [19] "revision" "state" "state_comment"
## [22] "submitter_id" "tags" "type"
## [25] "updated_datetime"
# The number of fields available for files endpoint
length(available_fields('files'))
## [1] 651
# The first few fields available for files endpoint
head(available_fields('files'))
## [1] "access" "acl"
## [3] "analysis.analysis_id" "analysis.analysis_type"
## [5] "analysis.created_datetime" "analysis.input_files.access"
The fields to be returned by a query can be specified following a paradigm similar to that of the dplyr package. The select() function is a verb that resets the fields slot of a GDCQuery; note that this is not quite analogous to the dplyr select() verb, which limits the columns to a subset of those already present. When using select() on a GDCQuery, the fields are completely replaced.
# Default fields here
qcases = cases()
qcases$fields
## [1] "aliquot_ids" "analyte_ids"
## [3] "case_autocomplete" "case_id"
## [5] "created_datetime" "days_to_index"
## [7] "days_to_lost_to_followup" "disease_type"
## [9] "index_date" "lost_to_followup"
## [11] "portion_ids" "primary_site"
## [13] "sample_ids" "slide_ids"
## [15] "state" "submitter_aliquot_ids"
## [17] "submitter_analyte_ids" "submitter_id"
## [19] "submitter_portion_ids" "submitter_sample_ids"
## [21] "submitter_slide_ids" "updated_datetime"
# set up query to use ALL available fields
# Note that checking of fields is done by select()
qcases = cases() %>% GenomicDataCommons::select(available_fields('cases'))
head(qcases$fields)
## [1] "case_id" "aliquot_ids"
## [3] "analyte_ids" "annotations.annotation_id"
## [5] "annotations.case_id" "annotations.case_submitter_id"
Finding fields of interest is such a common operation that the
GenomicDataCommons includes the grep_fields()
function and the
field_picker()
widget. See the appropriate help pages for details.
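For example, to look for workflow-related fields on the files endpoint (a sketch; see ?grep_fields for the exact argument names assumed here):
# Pattern-match against the available fields of an endpoint
grep_fields('files', 'workflow')
# equivalent in spirit to:
grep('workflow', available_fields('files'), value = TRUE)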
The GDC API offers a feature known as aggregation or faceting. By specifying one or more fields (of appropriate type), the GDC can return a count of the number of records matching each potential value. This is similar to the R table() function. Multiple fields can be requested at once, but the GDC API does not have a cross-tabulation feature; all aggregations are on one field at a time. Results of aggregations() calls come back as a list of data.frames (actually, tibbles).
# total number of files of a specific type
res = files() %>% facet(c('type','data_type')) %>% aggregations()
res$type
Using aggregations() is also an easy way to learn the contents of individual fields, and it forms the basis for faceted search pages.
[ GDC filtering
documentation ]
The GenomicDataCommons package uses a form of non-standard evaluation to specify R-like queries that are then translated into an R list. That R list is, upon calling a method that fetches results from the GDC API, translated into the appropriate JSON string. The R expression uses the formula interface, as suggested by Hadley Wickham in his vignette on non-standard evaluation:
It’s best to use a formula because a formula captures both the expression to evaluate and the environment where the evaluation occurs. This is important if the expression is a mixture of variables in a data frame and objects in the local environment [for example].
For the user, these details will not be too important except to note that a filter expression must begin with a “~”.
qfiles = files()
qfiles %>% count() # all files
## [1] 356381
To limit the file type, we can refer back to the section on faceting to see the possible values for the file field “type”. For example, to filter file results to only “gene_expression” files, we simply specify a filter.
qfiles = files() %>% filter(~ type == 'gene_expression')
# here is what the filter looks like after translation
str(get_filter(qfiles))
## List of 2
## $ op : 'scalar' chr "="
## $ content:List of 2
## ..$ field: chr "type"
## ..$ value: chr "gene_expression"
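Other operators are translated in the same spirit. For example, matching against a set of values can be written with %in% (a sketch, assuming the operator support described in ?filter):
qfiles2 = files() %>%
    filter(~ type == 'gene_expression' &
             cases.project.project_id %in% c('TCGA-OV','TCGA-BRCA'))
str(get_filter(qfiles2))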
What if we want to create a filter based on the project (‘TCGA-OVCA’, for example)? Well, we have a couple of possible ways to discover available fields. The first is based on base R functionality and some intuition.
grep('pro',available_fields('files'),value=TRUE)
## [1] "cases.diagnoses.progression_free_survival"
## [2] "cases.diagnoses.progression_free_survival_event"
## [3] "cases.diagnoses.progression_or_recurrence"
## [4] "cases.project.dbgap_accession_number"
## [5] "cases.project.disease_type"
## [6] "cases.project.intended_release_date"
## [7] "cases.project.name"
## [8] "cases.project.primary_site"
## [9] "cases.project.program.dbgap_accession_number"
## [10] "cases.project.program.name"
## [11] "cases.project.program.program_id"
## [12] "cases.project.project_id"
## [13] "cases.project.releasable"
## [14] "cases.project.released"
## [15] "cases.project.state"
## [16] "cases.samples.days_to_sample_procurement"
## [17] "cases.samples.method_of_sample_procurement"
## [18] "cases.samples.portions.slides.number_proliferating_cells"
## [19] "cases.tissue_source_site.project"
Interestingly, the project information is “nested” inside the case. We don’t need to know that detail other than to know that we now have a few potential guesses for where our information might be in the files records. We need to know where because we need to construct the appropriate filter.
files() %>% facet('cases.project.project_id') %>% aggregations()
## $cases.project.project_id
## key doc_count
## 1 FM-AD 36134
## 2 TCGA-BRCA 31511
## 3 TCGA-LUAD 17051
## 4 TCGA-UCEC 16130
## 5 TCGA-HNSC 15266
## 6 TCGA-OV 15057
## 7 TCGA-THCA 14420
## 8 TCGA-LUSC 15323
## 9 TCGA-LGG 14723
## 10 TCGA-KIRC 15082
## 11 TCGA-PRAD 14287
## 12 TCGA-COAD 14270
## 13 TCGA-GBM 11973
## 14 TCGA-SKCM 12724
## 15 TCGA-STAD 12845
## 16 TCGA-BLCA 11710
## 17 TCGA-LIHC 10814
## 18 TCGA-CESC 8593
## 19 TCGA-KIRP 8506
## 20 TCGA-SARC 7493
## 21 TCGA-PAAD 5306
## 22 TCGA-ESCA 5270
## 23 TCGA-PCPG 5032
## 24 TCGA-READ 4918
## 25 TCGA-TGCT 4217
## 26 TCGA-THYM 3444
## 27 TCGA-LAML 3960
## 28 TARGET-NBL 2795
## 29 TCGA-ACC 2546
## 30 TCGA-KICH 2324
## 31 TCGA-MESO 2330
## 32 TARGET-AML 2170
## 33 TCGA-UVM 2179
## 34 TCGA-UCS 1658
## 35 TARGET-WT 1406
## 36 TCGA-DLBC 1330
## 37 TCGA-CHOL 1348
## 38 TARGET-OS 47
## 39 TARGET-RT 174
## 40 TARGET-CCSK 15
We note that cases.project.project_id
looks like it is a good fit. We also
note that TCGA-OV
is the correct project_id, not TCGA-OVCA
. Note that
unlike with dplyr and friends, the filter()
method here replaces the
filter and does not build on any previous filters.
qfiles = files() %>%
filter( ~ cases.project.project_id == 'TCGA-OV' & type == 'gene_expression')
str(get_filter(qfiles))
## List of 2
## $ op : 'scalar' chr "and"
## $ content:List of 2
## ..$ :List of 2
## .. ..$ op : 'scalar' chr "="
## .. ..$ content:List of 2
## .. .. ..$ field: chr "cases.project.project_id"
## .. .. ..$ value: chr "TCGA-OV"
## ..$ :List of 2
## .. ..$ op : 'scalar' chr "="
## .. ..$ content:List of 2
## .. .. ..$ field: chr "type"
## .. .. ..$ value: chr "gene_expression"
qfiles %>% count()
## [1] 1137
Asking for a count() of results given these new filter criteria gives 1137 results. Generating a manifest for bulk downloads is as simple as asking for the manifest from the current query.
manifest_df = qfiles %>% manifest()
head(manifest_df)
Note that we might still not be quite there. Looking at filenames, there are suspiciously named files that might include “FPKM”, “FPKM-UQ”, or “counts”. Another round of grep and available_fields, looking for “type”, turned up that the field “analysis.workflow_type” provides the appropriate filter criterion.
qfiles = files() %>% filter( ~ cases.project.project_id == 'TCGA-OV' &
type == 'gene_expression' &
analysis.workflow_type == 'HTSeq - Counts')
manifest_df = qfiles %>% manifest()
nrow(manifest_df)
## [1] 379
The GDC Data Transfer Tool can be used (from R, transfer()
or from the
command-line) to orchestrate high-performance, restartable transfers of all the
files in the manifest. See the bulk downloads section for
details.
[ GDC authentication documentation ]
The GDC offers both “controlled-access” and “open” data. As of this writing, only data stored as files is “controlled-access”; that is, metadata accessible via the GDC is all “open” data and some files are “open” and some are “controlled-access”. Controlled-access data are only available after going through the process of obtaining access.
After controlled-access to one or more datasets has been granted, logging into the GDC web portal will allow you to access a GDC authentication token, which can be downloaded and then used to access available controlled-access data via the GenomicDataCommons package.
The GenomicDataCommons uses authentication tokens only for downloading data (see the transfer and gdcdata documentation). The package includes a helper function, gdc_token(), that looks for the token in one of three places (resolved in this order):
1. As a string stored in the environment variable GDC_TOKEN
2. As a file, named by the environment variable GDC_TOKEN_FILE
3. In a file in the user's home directory called .gdc_token
As a concrete example:
token = gdc_token()
transfer(...,token=token)
# or
transfer(...,token=gdc_token())
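If the token has been downloaded to a file, one way to make it discoverable is through one of the environment variables listed above; a sketch (the path here is hypothetical):
# Point the package at a downloaded token file before calling gdc_token()
Sys.setenv(GDC_TOKEN_FILE = "~/Downloads/gdc-user-token.txt")
token = gdc_token()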
The gdcdata
function takes a character vector of one or more file
ids. A simple way of producing such a vector is to produce a
manifest
data frame and then pass in the first column, which will
contain file ids.
fnames = gdcdata(manifest_df$id[1:2],progress=FALSE)
Note that for controlled-access data, a
GDC authentication token is required. Using the
BiocParallel
package may be useful for downloading in parallel,
particularly for large numbers of smallish files.
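A minimal sketch of such a parallel download (the worker count is arbitrary; on Windows, SnowParam() would replace MulticoreParam()):
library(BiocParallel)
# Download ten files from the manifest using four parallel workers
register(MulticoreParam(workers = 4))
fnames = unlist(bplapply(manifest_df$id[1:10], gdcdata))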
The bulk download functionality is only efficient (as of v1.2.0 of the GDC Data Transfer Tool) for relatively large files, so use this approach only when transferring BAM files or larger VCF files, for example. Otherwise, consider using the approach shown above, perhaps in parallel.
fnames = gdcdata(manifest_df$id[3:10], access_method = 'client')
res = cases() %>% facet("project.project_id") %>% aggregations()
head(res)
## $project.project_id
## key doc_count
## 1 FM-AD 18004
## 2 TARGET-NBL 1127
## 3 TCGA-BRCA 1098
## 4 TARGET-AML 988
## 5 TARGET-WT 652
## 6 TCGA-GBM 617
## 7 TCGA-OV 608
## 8 TCGA-LUAD 585
## 9 TCGA-UCEC 560
## 10 TCGA-KIRC 537
## 11 TCGA-HNSC 528
## 12 TCGA-LGG 516
## 13 TCGA-THCA 507
## 14 TCGA-LUSC 504
## 15 TCGA-PRAD 500
## 16 TCGA-SKCM 470
## 17 TCGA-COAD 461
## 18 TCGA-STAD 443
## 19 TCGA-BLCA 412
## 20 TARGET-OS 381
## 21 TCGA-LIHC 377
## 22 TCGA-CESC 307
## 23 TCGA-KIRP 291
## 24 TCGA-SARC 261
## 25 TCGA-LAML 200
## 26 TCGA-ESCA 185
## 27 TCGA-PAAD 185
## 28 TCGA-PCPG 179
## 29 TCGA-READ 172
## 30 TCGA-TGCT 150
## 31 TCGA-THYM 124
## 32 TCGA-KICH 113
## 33 TCGA-ACC 92
## 34 TCGA-MESO 87
## 35 TCGA-UVM 80
## 36 TARGET-RT 75
## 37 TCGA-DLBC 58
## 38 TCGA-UCS 57
## 39 TCGA-CHOL 51
## 40 TARGET-CCSK 13
library(ggplot2)
ggplot(res$project.project_id,aes(x = key, y = doc_count)) +
geom_bar(stat='identity') +
theme(axis.text.x = element_text(angle = 45, hjust = 1))
cases() %>% filter(~ project.program.name=='TARGET') %>% count()
## [1] 3236
cases() %>% filter(~ project.program.name=='TCGA') %>% count()
## [1] 11315
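The same breakdown for every program at once can be obtained by faceting on the program name, following the aggregation pattern used above:
# Count cases per program via aggregation rather than one filter per program
cases() %>% facet('project.program.name') %>% aggregations()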
# The need to do the "&" here is a requirement of the
# current version of the GDC API. I have filed a feature
# request to remove this requirement.
resp = cases() %>% filter(~ project.project_id=='TCGA-BRCA' &
project.project_id=='TCGA-BRCA' ) %>%
facet('samples.sample_type') %>% aggregations()
resp$samples.sample_type
# The need to do the "&" here is a requirement of the
# current version of the GDC API. I have filed a feature
# request to remove this requirement.
resp = cases() %>% filter(~ project.project_id=='TCGA-BRCA' &
samples.sample_type=='Solid Tissue Normal') %>%
GenomicDataCommons::select(c(default_fields(cases()),'samples.sample_type')) %>%
response_all()
count(resp)
## [1] 162
res = resp %>% results()
str(res[1],list.len=6)
## List of 1
## $ updated_datetime: chr [1:162] "2018-05-21T16:07:40.645885-05:00" "2018-05-21T16:07:40.645885-05:00" "2018-05-21T16:07:40.645885-05:00" "2018-05-21T16:07:40.645885-05:00" ...
head(ids(resp))
## [1] "bfd49783-1767-469b-9d79-7822301c5efc"
## [2] "a0576d44-9add-429b-9eeb-f7b234b7385f"
## [3] "955d4263-61f7-42e8-8a6e-772a0d6c209d"
## [4] "3d676bba-154b-4d22-ab59-d4d4da051b94"
## [5] "e666081b-caaf-4e8b-9446-807be71201a5"
## [6] "5fe3c51f-65e2-4da6-8924-a6fae17c022d"
res = files() %>% facet('type') %>% aggregations()
res$type
ggplot(res$type,aes(x = key,y = doc_count)) + geom_bar(stat='identity') +
theme(axis.text.x = element_text(angle = 45, hjust = 1))
q = files() %>%
GenomicDataCommons::select(available_fields('files')) %>%
filter(~ cases.project.project_id=='TCGA-GBM' &
data_type=='Gene Expression Quantification')
q %>% facet('analysis.workflow_type') %>% aggregations()
## $analysis.workflow_type
## key doc_count
## 1 HTSeq - Counts 174
## 2 HTSeq - FPKM 174
## 3 HTSeq - FPKM-UQ 174
# so need to add another filter
file_ids = q %>% filter(~ cases.project.project_id=='TCGA-GBM' &
data_type=='Gene Expression Quantification' &
analysis.workflow_type == 'HTSeq - Counts') %>%
GenomicDataCommons::select('file_id') %>%
response_all() %>%
ids()
I need to figure out how to do slicing reproducibly in a testing environment and for vignette building.
q = files() %>%
GenomicDataCommons::select(available_fields('files')) %>%
filter(~ cases.project.project_id == 'TCGA-GBM' &
data_type == 'Aligned Reads' &
experimental_strategy == 'RNA-Seq' &
data_format == 'BAM')
file_ids = q %>% response_all() %>% ids()
bamfile = slicing(file_ids[1],regions="chr12:6534405-6538375",token=gdc_token())
library(GenomicAlignments)
aligns = readGAlignments(bamfile)
If trying to connect to the GDC API results in an error like:
Error in curl::curl_fetch_memory(url, handle = handle) :
  SSL connect error
then ensure that the operating system openssl library is updated to version 1.0.1 or later. After updating openssl, reinstall the R curl and httr packages.
sessionInfo()
## R version 3.5.1 Patched (2018-07-12 r74967)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 16.04.5 LTS
##
## Matrix products: default
## BLAS: /home/biocbuild/bbs-3.7-bioc/R/lib/libRblas.so
## LAPACK: /home/biocbuild/bbs-3.7-bioc/R/lib/libRlapack.so
##
## locale:
## [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=en_US.UTF-8 LC_COLLATE=C
## [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
## [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] ggplot2_3.0.0 GenomicDataCommons_1.4.3
## [3] magrittr_1.5 knitr_1.20
## [5] BiocStyle_2.8.2
##
## loaded via a namespace (and not attached):
## [1] SummarizedExperiment_1.10.1 tidyselect_0.2.4
## [3] xfun_0.3 purrr_0.2.5
## [5] lattice_0.20-35 colorspace_1.3-2
## [7] htmltools_0.3.6 stats4_3.5.1
## [9] yaml_2.2.0 rlang_0.2.2
## [11] pillar_1.3.0 withr_2.1.2
## [13] glue_1.3.0 BiocParallel_1.14.2
## [15] rappdirs_0.3.1 BiocGenerics_0.26.0
## [17] bindrcpp_0.2.2 plyr_1.8.4
## [19] matrixStats_0.54.0 GenomeInfoDbData_1.1.0
## [21] bindr_0.1.1 stringr_1.3.1
## [23] zlibbioc_1.26.0 munsell_0.5.0
## [25] gtable_0.2.0 evaluate_0.11
## [27] labeling_0.3 Biobase_2.40.0
## [29] IRanges_2.14.11 GenomeInfoDb_1.16.0
## [31] parallel_3.5.1 curl_3.2
## [33] Rcpp_0.12.18 readr_1.1.1
## [35] scales_1.0.0 backports_1.1.2
## [37] DelayedArray_0.6.5 S4Vectors_0.18.3
## [39] jsonlite_1.5 XVector_0.20.0
## [41] hms_0.4.2 digest_0.6.16
## [43] stringi_1.2.4 bookdown_0.7
## [45] dplyr_0.7.6 GenomicRanges_1.32.6
## [47] grid_3.5.1 rprojroot_1.3-2
## [49] tools_3.5.1 bitops_1.0-6
## [51] RCurl_1.95-4.11 lazyeval_0.2.1
## [53] tibble_1.4.2 crayon_1.3.4
## [55] pkgconfig_2.0.2 Matrix_1.2-14
## [57] xml2_1.2.0 assertthat_0.2.0
## [59] rmarkdown_1.10 httr_1.3.1
## [61] R6_2.2.2 compiler_3.5.1
The S3 object-oriented programming paradigm is used.