Drug Combination Benchmark Group

Drug combinations create promising therapeutic opportunities for expanding the use of existing drugs and improving their efficacy. For instance, simultaneously modulating multiple targets can address the drug resistance seen in cancer treatments. However, experimentally exploring the space of possible drug combinations is not feasible: the number of experiments grows combinatorially with the number of drugs, doses, and cell lines. For this reason, ML models capable of identifying candidate combinatorial treatments could be very valuable.

Synergy quantifies the deviation of the observed drug combination response from the effect expected under the assumption of non-interaction. This benchmark group is defined on the TDC.DrugComb dataset and evaluates ML models on their ability to predict five established synergy endpoints:

  • The main endpoint is the drug combination sensitivity score, TDC.DrugComb_CSS, which is derived from the relative IC50 values of the compounds and the areas under their dose-response curves. For CSS, we also provide the tissue identity of the cell lines in which drug response was measured.
  • The Bliss model endpoint TDC.DrugComb_Bliss.
  • The highest single agent endpoint TDC.DrugComb_HSA.
  • The Loewe additivity model endpoint TDC.DrugComb_Loewe.
  • The zero interaction potency endpoint TDC.DrugComb_ZIP.
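
As a rough illustration of how these null-reference models differ, each computes a different expected combination effect from the single-drug responses; the synergy score is then the observed effect minus that expectation. A minimal sketch in pure Python, with hypothetical fractional-inhibition inputs (Loewe and ZIP require full dose-response curves and are omitted here):

```python
def bliss_expected(e1, e2):
    """Bliss independence: expected combined effect of two
    non-interacting drugs with fractional effects e1, e2 in [0, 1]."""
    return e1 + e2 - e1 * e2

def hsa_expected(e1, e2):
    """Highest single agent: the expected effect is simply that of
    the stronger of the two single drugs."""
    return max(e1, e2)

# Synergy score = observed combination effect - model's expectation.
observed = 0.80
e1, e2 = 0.50, 0.40
bliss_synergy = observed - bliss_expected(e1, e2)
hsa_synergy = observed - hsa_expected(e1, e2)
```

A positive score under a given model indicates synergy relative to that model's non-interaction baseline; the same observed response can yield different scores under Bliss versus HSA.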

To retrieve names of all benchmarks in this group, type the following:

from tdc import utils
names = utils.retrieve_benchmark_names('DrugCombo_Group')
# ['drugcomb_css', 'drugcomb_hsa', ...]

To access a benchmark in the group, use the following code:

from tdc.benchmark_group import drugcombo_group
group = drugcombo_group(path='data/')

benchmark = group.get('Drugcomb_CSS')

predictions = {}
name = benchmark['name']
train_val, test = benchmark['train_val'], benchmark['test']

## --- train your model --- ##

predictions[name] = pred_test
out = group.evaluate(predictions)
print(out)
Note that the output includes the automatic evaluations across tissues:

{'drugcomb_css': {'mae': 23.082},
 'drugcomb_css_kidney': {'mae': 21.906},
 'drugcomb_css_lung': {'mae': 21.341},
 'drugcomb_css_breast': {'mae': 18.542},
 'drugcomb_css_hematopoietic_lymphoid': {'mae': 40.55},
 'drugcomb_css_colon': {'mae': 25.224},
 'drugcomb_css_prostate': {'mae': 22.19},
 'drugcomb_css_ovary': {'mae': 19.638},
 'drugcomb_css_skin': {'mae': 18.777},
 'drugcomb_css_brain': {'mae': 21.855}}
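
Since the snippet above elides model training, a trivial baseline is useful for sanity-checking the pipeline end to end. A sketch using NumPy, with hypothetical toy arrays standing in for the benchmark's CSS labels: the baseline predicts the training-set mean for every test sample, and the MAE metric matches what group.evaluate reports.

```python
import numpy as np

# Hypothetical toy labels standing in for train_val / test CSS values.
y_train = np.array([20.0, 30.0, 25.0, 45.0])
y_test = np.array([28.0, 22.0])

# Trivial baseline: predict the training-set mean for every test sample.
pred_test = np.full_like(y_test, y_train.mean())

# MAE, the metric reported on the leaderboard.
mae = np.abs(y_test - pred_test).mean()
```

Any real model should comfortably beat this mean-predictor baseline; if it does not, the split handling or prediction alignment is likely at fault.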

Follow the instructions on how to use the BenchmarkGroup class to obtain the training, validation, and test sets, and how to submit your model to the leaderboard.

For every dataset in the benchmark group, we use the drug combination split to partition the data into training, validation, and test sets: 20% of the data samples are held out as the test set. The performance metric is MAE.
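
A rough illustration of the idea behind a combination split (a hypothetical helper, not the TDC implementation): measurements are grouped by drug pair, and whole pairs rather than individual measurements are held out, so no pair in the test set is ever seen during training.

```python
import random

def combination_split(rows, test_frac=0.2, seed=0):
    """Hold out whole (drug1, drug2) pairs so that no pair in the
    test set appears in training. Hypothetical sketch."""
    pairs = sorted({(r["drug1"], r["drug2"]) for r in rows})
    rng = random.Random(seed)
    rng.shuffle(pairs)
    n_test = max(1, int(len(pairs) * test_frac))
    test_pairs = set(pairs[:n_test])
    train = [r for r in rows if (r["drug1"], r["drug2"]) not in test_pairs]
    test = [r for r in rows if (r["drug1"], r["drug2"]) in test_pairs]
    return train, test

# Toy records; a real row would also carry cell line, doses, etc.
rows = [
    {"drug1": "A", "drug2": "B", "css": 21.0},
    {"drug1": "A", "drug2": "B", "css": 23.5},
    {"drug1": "A", "drug2": "C", "css": 18.0},
    {"drug1": "B", "drug2": "C", "css": 40.0},
    {"drug1": "B", "drug2": "D", "css": 25.0},
]
train, test = combination_split(rows)
```

Splitting by pair rather than by row is what makes the benchmark test generalization to unseen combinations instead of memorization of repeated measurements.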

Note that the per-tissue evaluations are computed automatically from the test-set predictions on the TDC.DrugComb_CSS endpoint, using the tissue identity of each cell line.
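
The per-tissue breakdown amounts to grouping test samples by the tissue of their cell line and computing MAE within each group, alongside the overall MAE. A minimal sketch in pure Python, with hypothetical tissue labels and values:

```python
from collections import defaultdict

# Hypothetical test-set records: (tissue, true CSS, predicted CSS).
records = [
    ("lung", 20.0, 25.0),
    ("lung", 30.0, 27.0),
    ("breast", 15.0, 18.0),
]

# Accumulate absolute errors overall and per tissue.
errors = defaultdict(list)
for tissue, y_true, y_pred in records:
    err = abs(y_true - y_pred)
    errors["drugcomb_css"].append(err)
    errors[f"drugcomb_css_{tissue}"].append(err)

# One MAE per key, mirroring the keyed structure of group.evaluate's output.
maes = {k: sum(v) / len(v) for k, v in errors.items()}
```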


Benchmark Data Summary


Dataset               Size     Task        Metric   Dataset Split
TDC.DrugComb_CSS      297,098  Regression  MAE      Combination
TDC.DrugComb_HSA      297,098  Regression  MAE      Combination
TDC.DrugComb_Loewe    297,098  Regression  MAE      Combination
TDC.DrugComb_Bliss    297,098  Regression  MAE      Combination
TDC.DrugComb_ZIP      297,098  Regression  MAE      Combination