DerivaML Dataset¶
DerivaML is a class library built on the Deriva scientific asset management system, designed to simplify many of the basic operations involved in building and testing ML models with common toolkits such as TensorFlow. This notebook reviews the basic features of the DerivaML library.
Set up DerivaML for the test case¶
%load_ext autoreload
%autoreload 2
from deriva.core.utils.globus_auth_utils import GlobusNativeLogin
from deriva_ml.demo_catalog import create_demo_catalog, DemoML
from deriva_ml import MLVocab, ExecutionConfiguration, Workflow, DerivaSystemColumns, VersionPart, DatasetSpec
import pandas as pd
from IPython.display import display, Markdown, HTML, JSON
Set the details for the catalog we want and authenticate to the server if needed.
hostname = 'localhost'
domain_schema = 'demo-schema'
gnl = GlobusNativeLogin(host=hostname)
if gnl.is_logged_in([hostname]):
    print("You are already logged in.")
else:
    gnl.login([hostname], no_local_server=True, no_browser=True, refresh_tokens=True, update_bdbag_keychain=True)
    print("Login Successful")
Create a test catalog and get an instance of the DemoML class.
test_catalog = create_demo_catalog(hostname, domain_schema)
ml_instance = DemoML(hostname, test_catalog.catalog_id, use_minid=False)
Configure DerivaML Datasets¶
In Deriva-ML, a dataset is used to aggregate instances of entities. Before we can create any datasets, however, we must configure Deriva-ML for the specifics of those datasets. The first step is to tell Deriva-ML which types of user-defined objects can be associated with a dataset.
Note that out of the box, Deriva-ML is configured to allow datasets to contain other datasets (i.e., nested datasets), so we don't need any additional configuration for that.
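We can verify this directly: if nested datasets are enabled, the dataset table itself should appear among the allowed element types. This is a minimal sketch that assumes nested-dataset support shows up as an element type named 'Dataset'; the exact name may differ in your deployment.
# Sanity check (assumption: nested-dataset support is exposed as a
# 'Dataset' element type; the name may differ in your deployment).
print('Dataset' in [t.name for t in ml_instance.list_dataset_element_types()])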
print(f"Current dataset_table element types: {[a.name for a in ml_instance.list_dataset_element_types()]}")
ml_instance.add_dataset_element_type("Subject")
ml_instance.add_dataset_element_type("Image")
print(f"New dataset_table element types {[a.name for a in ml_instance.list_dataset_element_types()]}")
Now that we have configured the dataset element types, we need to define dataset types so that we can distinguish between the different kinds of datasets we will create.
# Define the dataset types we will use.
ml_instance.add_term(MLVocab.dataset_type, "DemoSet", description="A test dataset")
ml_instance.add_term(MLVocab.dataset_type, 'Partitioned', description="A partitioned dataset for ML training.")
ml_instance.add_term(MLVocab.dataset_type, "Subject", description="A dataset of subjects")
ml_instance.add_term(MLVocab.dataset_type, "Image", description="A dataset of images")
ml_instance.add_term(MLVocab.dataset_type, "Training", description="Training dataset")
ml_instance.add_term(MLVocab.dataset_type, "Testing", description="Testing dataset")
ml_instance.add_term(MLVocab.dataset_type, "Validation", description="Validation dataset")
ml_instance.list_vocabulary_terms(MLVocab.dataset_type)
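To work with the terms programmatically, the returned records can be reduced to their names. This sketch assumes each term record carries a name attribute, as the element-type records above do.
# Hypothetical: extract just the term names (assumes a .name attribute on each record).
print([t.name for t in ml_instance.list_vocabulary_terms(MLVocab.dataset_type)])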
Now create datasets and populate them with elements from the test catalog.
ml_instance.add_term(MLVocab.workflow_type, "Create Dataset Notebook", description="A workflow that creates a new dataset")
# Now let's define the workflow for this notebook.
api_workflow = Workflow(
    name="API Workflow",
    url="https://github.com/informatics-isi-edu/deriva-ml/blob/main/docs/Notebooks/DerivaML%20Dataset.ipynb",
    workflow_type="Create Dataset Notebook",
)
dataset_execution = ml_instance.create_execution(
    ExecutionConfiguration(
        workflow=api_workflow,
        description="Our Sample Workflow instance",
    )
)
subject_dataset = dataset_execution.create_dataset(['DemoSet', 'Subject'], description="A subject dataset")
image_dataset = dataset_execution.create_dataset(['DemoSet', 'Image'], description="An image training dataset")
datasets = pd.DataFrame(ml_instance.find_datasets()).drop(columns=DerivaSystemColumns)
display(
    Markdown('## Datasets'),
    datasets,
)
And now that we have defined some datasets, we can add elements of the appropriate type to them. We can see what is in our new datasets by listing the dataset members.
# Get list of subjects and images from the catalog using the DataPath API.
dp = ml_instance.domain_path # Each call returns a new path instance, so only call once...
subject_rids = [i['RID'] for i in dp.tables['Subject'].entities().fetch()]
image_rids = [i['RID'] for i in dp.tables['Image'].entities().fetch()]
ml_instance.add_dataset_members(dataset_rid=subject_dataset, members=subject_rids)
ml_instance.add_dataset_members(dataset_rid=image_dataset, members=image_rids)
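As a quick sanity check, we can confirm how many members were just added, using only the RID lists computed above.
# Quick sanity check on how many members were just added.
print(f'Added {len(subject_rids)} subjects and {len(image_rids)} images')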
# List the contents of our datasets, omitting system columns like modification time.
display(
    Markdown('## Subject Dataset'),
    pd.DataFrame(ml_instance.list_dataset_members(subject_dataset)['Subject']).drop(columns=DerivaSystemColumns),
    Markdown('## Image Dataset'),
    pd.DataFrame(ml_instance.list_dataset_members(image_dataset)['Image']).drop(columns=DerivaSystemColumns),
)
Create partitioned datasets¶
Now let's create some subsets of the original dataset based on subject-level metadata. We will download the subject dataset and examine its metadata to decide how to partition the original data. Since we are not going to look at the images themselves, we use the materialize=False option to save time.
dataset_bag = ml_instance.download_dataset_bag(
    DatasetSpec(rid=subject_dataset, version=ml_instance.dataset_version(subject_dataset), materialize=False)
)
print("Bag downloaded")
The domain model has two entity types, Subject and Image: each Image is associated with a single Subject, but a Subject can have multiple Images. Let's look at the subjects and partition them into training and test datasets.
# Get information about the subjects and their images.
subject_df = dataset_bag.get_table_as_dataframe('Subject')[['RID', 'Name']]
image_df = dataset_bag.get_table_as_dataframe('Image')[['RID', 'Subject', 'URL']]
# Join images to their subjects on the Subject foreign key.
metadata_df = subject_df.merge(image_df, left_on='RID', right_on='Subject', suffixes=('_subject', '_image'))
display(metadata_df)
For this example, let's partition the data based on the name of the subject. Of course, in a real example we would do a more complex analysis to decide which subset each element goes into.
def thing_number(name: pd.Series) -> pd.Series:
    """Extract the numeric suffix from subject names of the form 'ThingN'."""
    return name.map(lambda n: int(n.replace('Thing', '')))
training_rids = metadata_df.loc[lambda df: thing_number(df['Name']) % 3 == 0]['RID_image'].tolist()
testing_rids = metadata_df.loc[lambda df: thing_number(df['Name']) % 3 == 1]['RID_image'].tolist()
validation_rids = metadata_df.loc[lambda df: thing_number(df['Name']) % 3 == 2]['RID_image'].tolist()
print(f'Training images: {training_rids}')
print(f'Testing images: {testing_rids}')
print(f'Validation images: {validation_rids}')
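Before creating the datasets, it is worth confirming that the three partitions are pairwise disjoint and together cover every image. This check uses only the lists computed above.
# Verify the partitions are pairwise disjoint and cover all images.
partitions = [set(training_rids), set(testing_rids), set(validation_rids)]
assert sum(len(p) for p in partitions) == len(set().union(*partitions)), 'partitions overlap'
assert set().union(*partitions) == set(metadata_df['RID_image']), 'some images are unassigned'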
Now that we know what we want in each dataset, let's create a dataset for each partition, along with a nested dataset to track the entire collection.
nested_dataset = dataset_execution.create_dataset(['Partitioned', 'Image'], description='A nested dataset for machine learning')
training_dataset = dataset_execution.create_dataset('Training', description='An image dataset for training')
testing_dataset = dataset_execution.create_dataset('Testing', description='An image dataset for testing')
validation_dataset = dataset_execution.create_dataset('Validation', description='An image dataset for validation')
pd.DataFrame(ml_instance.find_datasets()).drop(columns=DerivaSystemColumns)
And then fill the datasets with the appropriate members.
ml_instance.add_dataset_members(dataset_rid=nested_dataset, members=[training_dataset, testing_dataset, validation_dataset])
ml_instance.add_dataset_members(dataset_rid=training_dataset, members=training_rids)
ml_instance.add_dataset_members(dataset_rid=testing_dataset, members=testing_rids)
ml_instance.add_dataset_members(dataset_rid=validation_dataset, members=validation_rids)
OK, let's see what we have now.
display(
    Markdown('## Nested Dataset'),
    pd.DataFrame(ml_instance.list_dataset_members(nested_dataset)['Dataset']).drop(columns=DerivaSystemColumns),
    Markdown('## Training Dataset'),
    pd.DataFrame(ml_instance.list_dataset_members(training_dataset)['Image']).drop(columns=DerivaSystemColumns),
    Markdown('## Testing Dataset'),
    pd.DataFrame(ml_instance.list_dataset_members(testing_dataset)['Image']).drop(columns=DerivaSystemColumns),
    Markdown('## Validation Dataset'),
    pd.DataFrame(ml_instance.list_dataset_members(validation_dataset)['Image']).drop(columns=DerivaSystemColumns),
)
print(f'Dataset parents: {ml_instance.list_dataset_parents(training_dataset)}')
print(f'Dataset children: {ml_instance.list_dataset_children(nested_dataset)}')
As our very last step, let's get a persistent identifier (PID) that will allow us to share and cite the dataset we just created.
dataset_citation = ml_instance.cite(nested_dataset)
display(
    HTML(f'Nested dataset citation: <a href={dataset_citation}>{dataset_citation}</a>')
)
display(
    Markdown('## Nested Dataset -- Recursive Listing'),
    JSON(ml_instance.list_dataset_members(nested_dataset, recurse=True)),
)
Dataset Versions¶
Datasets carry a version number that can be retrieved or incremented. We follow the equivalent of semantic versioning, but for data rather than code. Note that datasets are also versioned implicitly, since a dataset RID can include a catalog snapshot ID as well.
print(f'Current dataset version for training_dataset: {ml_instance.dataset_version(training_dataset)}')
next_version = ml_instance.increment_dataset_version(training_dataset, VersionPart.minor)
print(f'Next dataset version for training_dataset: {next_version}')
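Because versions are recorded explicitly, a download can be pinned to a particular version. The sketch below reuses DatasetSpec exactly as in the earlier download, pinning to the version we just created.
# Pin a download to a specific dataset version (reusing DatasetSpec as above).
pinned_spec = DatasetSpec(rid=training_dataset, version=next_version, materialize=False)
training_bag = ml_instance.download_dataset_bag(pinned_spec)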
display(HTML(f'<a href={ml_instance.chaise_url("Dataset")}>Browse Datasets</a>'))
# Clean up by deleting the demo catalog.
test_catalog.delete_ermrest_catalog(really=True)