DerivaML Class
The DerivaML class provides a range of methods to interact with a Deriva catalog. These methods assume that the catalog contains a deriva-ml schema and a domain schema.
Data Catalog: The catalog must include both the domain schema and a standard ML schema for effective data management.
- Domain schema: The domain schema includes the data collected or generated by domain-specific experiments or systems.
- ML schema: Each entity in the ML schema is designed to capture details of the ML development process. It includes the following tables:
  - A Dataset represents a data collection, such as an aggregation identified for training, validation, and testing purposes.
  - A Workflow represents a specific sequence of computational steps or human interactions.
  - An Execution is an instance of a workflow that a user instantiates at a specific time.
  - An Execution Asset is an output file that results from the execution of a workflow.
  - An Execution Metadata is an asset entity for saving metadata files referencing a given execution.
BuiltinTypes
Bases: Enum
ERMrest built-in data types.
Maps ERMrest's built-in data types to their type names. These types are used for defining column types in tables and for type validation.
Attributes:

Name | Type | Description |
---|---|---|
text | str | Text/string type. |
int2 | str | 16-bit integer. |
jsonb | str | Binary JSON. |
float8 | str | 64-bit float. |
timestamp | str | Timestamp without timezone. |
int8 | str | 64-bit integer. |
boolean | str | Boolean type. |
json | str | JSON type. |
float4 | str | 32-bit float. |
int4 | str | 32-bit integer. |
timestamptz | str | Timestamp with timezone. |
date | str | Date type. |
ermrest_rid | str | Resource identifier. |
ermrest_rcb | str | Record created by. |
ermrest_rmb | str | Record modified by. |
ermrest_rct | str | Record creation time. |
ermrest_rmt | str | Record modification time. |
markdown | str | Markdown text. |
longtext | str | Long text. |
ermrest_curie | str | Compact URI. |
ermrest_uri | str | URI type. |
color_rgb_hex | str | RGB color in hex. |
serial2 | str | 16-bit auto-incrementing. |
serial4 | str | 32-bit auto-incrementing. |
serial8 | str | 64-bit auto-incrementing. |
Source code in src/deriva_ml/core/enums.py
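As an illustration of how such a type enumeration works, here is a minimal stdlib Enum sketch (not the real deriva-ml class, which covers every row above; it assumes each member's value is simply the ERMrest type name):

```python
from enum import Enum

class BuiltinTypesSketch(Enum):
    # A few representative ERMrest type names; the real enum in
    # deriva_ml.core.enums covers the full table above.
    text = "text"
    int8 = "int8"
    float8 = "float8"
    boolean = "boolean"
    timestamptz = "timestamptz"
    markdown = "markdown"

# Members can be looked up by name, e.g. when validating a column type.
print(BuiltinTypesSketch["int8"].value)
```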
ColumnDefinition
Bases: BaseModel
Defines a column in an ERMrest table.
Provides a Pydantic model for defining columns with their types, constraints, and metadata. Maps to deriva_py's Column.define functionality.
Attributes:

Name | Type | Description |
---|---|---|
name | str | Name of the column. |
type | BuiltinTypes | ERMrest data type for the column. |
nullok | bool | Whether NULL values are allowed. Defaults to True. |
default | Any | Default value for the column. |
comment | str \| None | Description of the column's purpose. |
acls | dict | Access control lists. |
acl_bindings | dict | Dynamic access control bindings. |
annotations | dict | Additional metadata annotations. |
Example

>>> col = ColumnDefinition(
...     name="score",
...     type=BuiltinTypes.float4,
...     nullok=False,
...     comment="Confidence score between 0 and 1"
... )
Source code in src/deriva_ml/core/ermrest.py
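ColumnDefinition itself is a Pydantic model; to show its shape, here is a rough stdlib-dataclass analogue (a sketch based on the attribute table above; defaults other than nullok are assumptions):

```python
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Any

@dataclass
class ColumnDefinitionSketch:
    # Field names mirror the attribute table above; this is not the
    # real Pydantic model from deriva_ml.core.ermrest.
    name: str
    type: str                      # an ERMrest type name, e.g. "float4"
    nullok: bool = True            # NULL values allowed by default
    default: Any = None
    comment: str | None = None
    acls: dict = field(default_factory=dict)
    acl_bindings: dict = field(default_factory=dict)
    annotations: dict = field(default_factory=dict)

col = ColumnDefinitionSketch(
    name="score",
    type="float4",
    nullok=False,
    comment="Confidence score between 0 and 1",
)
```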
DerivaML
Bases: Dataset
Core class for machine learning operations on a Deriva catalog.
This class provides core functionality for managing ML workflows, features, and datasets in a Deriva catalog. It handles data versioning, feature management, vocabulary control, and execution tracking.
Attributes:

Name | Type | Description |
---|---|---|
host_name | str | Hostname of the Deriva server (e.g., 'deriva.example.org'). |
catalog_id | Union[str, int] | Catalog identifier or name. |
domain_schema | str | Schema name for domain-specific tables and relationships. |
model | DerivaModel | ERMrest model for the catalog. |
working_dir | Path | Directory for storing computation data and results. |
cache_dir | Path | Directory for caching downloaded datasets. |
ml_schema | str | Schema name for ML-specific tables (default: 'deriva_ml'). |
configuration | ExecutionConfiguration | Current execution configuration. |
project_name | str | Name of the current project. |
start_time | datetime | Timestamp when this instance was created. |
status | str | Current status of operations. |
Example

>>> ml = DerivaML('deriva.example.org', 'my_catalog')
>>> ml.create_feature('my_table', 'new_feature')
>>> ml.add_term('vocabulary_table', 'new_term', description='Description of term')
Source code in src/deriva_ml/core/base.py
domain_path
property
domain_path: DataPath
Returns path builder for domain schema.
Provides a convenient way to access tables and construct queries within the domain-specific schema.
Returns:

Type | Description |
---|---|
DataPath | datapath._CatalogWrapper: Path builder object scoped to the domain schema. |

Example

>>> domain = ml.domain_path
>>> results = domain.my_table.entities().fetch()
pathBuilder
property
pathBuilder: _SchemaWrapper
Returns catalog path builder for queries.
The path builder provides a fluent interface for constructing complex queries against the catalog. This is a core component used by many other methods to interact with the catalog.
Returns:

Type | Description |
---|---|
_SchemaWrapper | datapath._CatalogWrapper: A new instance of the catalog path builder. |

Example

>>> path = ml.pathBuilder.schemas['my_schema'].tables['my_table']
>>> results = path.entities().fetch()
__del__
__del__()
Cleanup method to handle incomplete executions.
Source code in src/deriva_ml/core/base.py
__init__
__init__(
    hostname: str,
    catalog_id: str | int,
    domain_schema: str | None = None,
    project_name: str | None = None,
    cache_dir: str | Path | None = None,
    working_dir: str | Path | None = None,
    ml_schema: str = ML_SCHEMA,
    logging_level=logging.WARNING,
    credential=None,
    use_minid: bool = True,
)
Initializes a DerivaML instance.
This method will connect to a catalog and initialize local configuration for the ML execution. This class is intended to be used as a base class on which domain-specific interfaces are built.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
hostname | str | Hostname of the Deriva server. | required |
catalog_id | str \| int | Catalog ID. Either an identifier or a catalog name. | required |
domain_schema | str \| None | Schema name for domain-specific tables and relationships. Defaults to the name of the schema that is not one of the standard schemas. If there is more than one user-defined schema, this argument must be provided. | None |
ml_schema | str | Schema name for the ML schema. Used if you have a non-standard configuration of deriva-ml. | ML_SCHEMA |
project_name | str \| None | Project name. Defaults to the name of the domain schema. | None |
cache_dir | str \| Path \| None | Directory path for caching data downloaded from the Deriva server as bdbags. | None |
working_dir | str \| Path \| None | Directory path for storing data used by or generated by any computations. | None |
use_minid | bool | Use the MINID service when downloading dataset bags. | True |
Source code in src/deriva_ml/core/base.py
add_dataset_element_type
add_dataset_element_type(element: str | Table) -> Table
A dataset_table is a heterogeneous collection of objects, each of which comes from a different table. This routine makes it possible to add objects from the specified table to a dataset_table.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
element | str \| Table | Name of the table or table object that is to be added to the dataset_table. | required |

Returns:

Type | Description |
---|---|
Table | The table object that was added to the dataset_table. |
Source code in src/deriva_ml/dataset/dataset.py
add_dataset_members
add_dataset_members(
    dataset_rid: RID,
    members: list[RID] | dict[str, list[RID]],
    validate: bool = True,
    description: str | None = "",
    execution_rid: RID | None = None,
) -> None
Adds members to a dataset.
Associates one or more records with a dataset. Can optionally validate member types and create a new dataset version to track the changes.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
dataset_rid | RID | Resource Identifier of the dataset. | required |
members | list[RID] \| dict[str, list[RID]] | List of RIDs to add as dataset members. Can be organized into a dictionary that maps a table name to the member RIDs belonging to it. | required |
validate | bool | Whether to validate member types. Defaults to True. | True |
description | str \| None | Optional description of the member additions. | '' |
execution_rid | RID \| None | Optional execution RID to associate with changes. | None |

Raises:

Type | Description |
---|---|
DerivaMLException | If dataset_rid is invalid, members are invalid or of the wrong type, adding members would create a cycle, or validation fails. |
Example

>>> ml.add_dataset_members(
...     dataset_rid="1-abc123",
...     members=["1-def456", "1-ghi789"],
...     description="Added sample data"
... )
Source code in src/deriva_ml/dataset/dataset.py
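Since members may be passed either as a flat list of RIDs or as a dict keyed by table name, the two shapes can be normalized into one mapping. A sketch of that normalization (illustrative only; the real method resolves each RID's table via the catalog, which the placeholder key stands in for):

```python
def normalize_members(members, default_table="unresolved"):
    """Normalize list[RID] | dict[str, list[RID]] to dict[str, list[RID]].

    For a flat list the member tables are unknown without a catalog
    lookup, so this sketch files them under a placeholder key.
    """
    if isinstance(members, dict):
        return {table: list(rids) for table, rids in members.items()}
    return {default_table: list(members)}

print(normalize_members(["1-def456", "1-ghi789"]))
print(normalize_members({"Image": ["1-abc123"]}))
```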
add_files
add_files(
    files: Iterable[FileSpec],
    dataset_types: str | list[str] | None = None,
    description: str = "",
    execution_rid: RID | None = None,
) -> RID
Adds files to the catalog with their metadata.
Registers files in the catalog along with their metadata (MD5, length, URL) and associates them with specified file types. Optionally links files to an execution record.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
files | Iterable[FileSpec] | File specifications containing MD5 checksum, length, and URL. | required |
dataset_types | str \| list[str] \| None | One or more dataset type terms from the File_Type vocabulary. | None |
description | str | Description of the files. | '' |
execution_rid | RID \| None | Optional execution RID to associate files with. | None |

Returns:

Name | Type | Description |
---|---|---|
RID | RID | Resource Identifier of the dataset that represents the newly added files. |

Raises:

Type | Description |
---|---|
DerivaMLException | If dataset_types are invalid or execution_rid is not an execution record. |
Examples:

Add a single file type:

>>> files = [FileSpec(url="path/to/file.txt", md5="abc123", length=1000)]
>>> rids = ml.add_files(files, dataset_types="text")

Add multiple file types:

>>> rids = ml.add_files(
...     files=[FileSpec(url="image.png", md5="def456", length=2000)],
...     dataset_types=["image", "png"],
...     execution_rid="1-xyz789"
... )
Source code in src/deriva_ml/core/base.py
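The md5 and length fields of a FileSpec can be computed locally before calling add_files. A stdlib sketch (the FileSpec constructor is stubbed out as a plain dict here):

```python
import hashlib
import os
import tempfile
from pathlib import Path

def file_spec_fields(path) -> dict:
    """Compute the md5/length/url fields a FileSpec expects."""
    path = Path(path)
    md5 = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # stream large files
            md5.update(chunk)
    return {
        "url": path.as_posix(),
        "md5": md5.hexdigest(),
        "length": path.stat().st_size,
    }

# Demonstrate on a small temporary file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello")
spec = file_spec_fields(tmp.name)
os.unlink(tmp.name)
print(spec["md5"], spec["length"])
```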
add_page
add_page(title: str, content: str) -> None
Adds page to web interface.
Creates a new page in the catalog's web interface with the specified title and content. The page will be accessible through the catalog's navigation system.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
title | str | The title of the page to be displayed in navigation and headers. | required |
content | str | The main content of the page; can include HTML markup. | required |

Raises:

Type | Description |
---|---|
DerivaMLException | If the page creation fails or the user lacks necessary permissions. |
Example

>>> ml.add_page(
...     title="Analysis Results",
...     content="<h1>Results</h1><p>Analysis completed successfully...</p>"
... )
Source code in src/deriva_ml/core/base.py
add_term
add_term(
table: str | Table,
term_name: str,
description: str,
synonyms: list[str] | None = None,
exists_ok: bool = True,
) -> VocabularyTerm
Adds a term to a vocabulary table.
Creates a new standardized term with description and optional synonyms in a vocabulary table. Can either create a new term or return an existing one if it already exists.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
table | str \| Table | Vocabulary table to add term to (name or Table object). | required |
term_name | str | Primary name of the term (must be unique within vocabulary). | required |
description | str | Explanation of term's meaning and usage. | required |
synonyms | list[str] \| None | Alternative names for the term. | None |
exists_ok | bool | If True, return the existing term if found. If False, raise error. | True |

Returns:

Name | Type | Description |
---|---|---|
VocabularyTerm | VocabularyTerm | Object representing the created or existing term. |

Raises:

Type | Description |
---|---|
DerivaMLException | If a term exists and exists_ok=False, or if the table is not a vocabulary table. |
Examples:

Add a new tissue type:

>>> term = ml.add_term(
...     table="tissue_types",
...     term_name="epithelial",
...     description="Epithelial tissue type",
...     synonyms=["epithelium"]
... )

Attempt to add an existing term:

>>> term = ml.add_term("tissue_types", "epithelial", "...", exists_ok=True)
Source code in src/deriva_ml/core/base.py
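The exists_ok contract can be illustrated with a toy in-memory vocabulary (a sketch, not the catalog-backed implementation; VocabularyError stands in for DerivaMLException):

```python
class VocabularyError(Exception):
    """Stand-in for DerivaMLException in this sketch."""

def add_term_sketch(vocab: dict, term_name: str, description: str,
                    exists_ok: bool = True) -> dict:
    """Insert a term, returning or raising on duplicates per exists_ok."""
    if term_name in vocab:
        if exists_ok:
            return vocab[term_name]  # return the existing term unchanged
        raise VocabularyError(f"Term {term_name!r} already exists")
    vocab[term_name] = {"Name": term_name, "Description": description}
    return vocab[term_name]

tissue_types = {}
add_term_sketch(tissue_types, "epithelial", "Epithelial tissue type")
existing = add_term_sketch(tissue_types, "epithelial", "...", exists_ok=True)
```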
add_workflow
add_workflow(workflow: Workflow) -> RID
Adds a workflow to the catalog.
Registers a new workflow in the catalog or returns the RID of an existing workflow with the same URL or checksum.
Each workflow represents a specific computational process or analysis pipeline.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
workflow | Workflow | Workflow object containing name, URL, type, version, and description. | required |

Returns:

Name | Type | Description |
---|---|---|
RID | RID | Resource Identifier of the added or existing workflow. |

Raises:

Type | Description |
---|---|
DerivaMLException | If workflow insertion fails or required fields are missing. |
Examples:
>>> workflow = Workflow(
... name="Gene Analysis",
... url="https://github.com/org/repo/workflows/gene_analysis.py",
... workflow_type="python_script",
... version="1.0.0",
... description="Analyzes gene expression patterns"
... )
>>> workflow_rid = ml.add_workflow(workflow)
Source code in src/deriva_ml/core/base.py
chaise_url
chaise_url(table: RID | Table | str) -> str
Generates Chaise web interface URL.
Chaise is Deriva's web interface for data exploration. This method creates a URL that directly links to the specified table or record.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
table | RID \| Table \| str | Table to generate URL for (name, Table object, or RID). | required |

Returns:

Name | Type | Description |
---|---|---|
str | str | URL in format: https://{host}/chaise/recordset/#{catalog}/{schema}:{table} |

Raises:

Type | Description |
---|---|
DerivaMLException | If table or RID cannot be found. |
Examples:

Using a table name:

>>> ml.chaise_url("experiment_table")
'https://deriva.org/chaise/recordset/#1/schema:experiment_table'

Using a RID:

>>> ml.chaise_url("1-abc123")
Source code in src/deriva_ml/core/base.py
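The recordset URL in the Returns table is plain string assembly; a minimal sketch of the format (host, catalog, and names below are placeholders):

```python
def chaise_recordset_url(host: str, catalog_id: str, schema: str, table: str) -> str:
    """Build a Chaise recordset URL in the format shown above."""
    return f"https://{host}/chaise/recordset/#{catalog_id}/{schema}:{table}"

url = chaise_recordset_url("deriva.org", "1", "schema", "experiment_table")
print(url)
```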
cite
cite(entity: Dict[str, Any] | str) -> str
Generates permanent citation URL.
Creates a versioned URL that can be used to reference a specific entity in the catalog. The URL includes the catalog snapshot time to ensure version stability.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
entity | Dict[str, Any] \| str | Either a RID string or a dictionary containing entity data with a 'RID' key. | required |

Returns:

Name | Type | Description |
---|---|---|
str | str | Permanent citation URL in format: https://{host}/id/{catalog}/{rid}@{snapshot_time} |

Raises:

Type | Description |
---|---|
DerivaMLException | If an entity doesn't exist or lacks a RID. |
Examples:

Using a RID string:

>>> ml.cite("1-abc123")
'https://deriva.org/id/1/1-abc123@2024-01-01T12:00:00'

Using a dictionary:

>>> url = ml.cite({"RID": "1-abc123"})
Source code in src/deriva_ml/core/base.py
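The citation URL pins a RID to a catalog snapshot time; a minimal sketch of the format in the Returns table (all values below are placeholders):

```python
def citation_url(host: str, catalog_id: str, rid: str, snapshot_time: str) -> str:
    """Build a versioned citation URL: https://{host}/id/{catalog}/{rid}@{snapshot}."""
    return f"https://{host}/id/{catalog_id}/{rid}@{snapshot_time}"

url = citation_url("deriva.org", "1", "1-abc123", "2024-01-01T12:00:00")
print(url)
```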
create_asset
create_asset(
    asset_name: str,
    column_defs: Iterable[ColumnDefinition] | None = None,
    fkey_defs: Iterable[ColumnDefinition] | None = None,
    referenced_tables: Iterable[Table] | None = None,
    comment: str = "",
    schema: str | None = None,
) -> Table
Creates an asset table.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
asset_name | str | Name of the asset table. | required |
column_defs | Iterable[ColumnDefinition] \| None | Iterable of ColumnDefinition objects to provide additional metadata for asset. | None |
fkey_defs | Iterable[ColumnDefinition] \| None | Iterable of ForeignKeyDefinition objects to provide additional metadata for asset. | None |
referenced_tables | Iterable[Table] \| None | Iterable of Table objects to which asset should provide foreign-key references. | None |
comment | str | Description of the asset table. | '' |
schema | str \| None | Schema in which to create the asset table. Defaults to domain_schema. | None |

Returns:

Type | Description |
---|---|
Table | Table object for the asset table. |
Source code in src/deriva_ml/core/base.py
create_dataset
create_dataset(
    dataset_types: str | list[str] | None = None,
    description: str = "",
    execution_rid: RID | None = None,
    version: DatasetVersion | None = None,
) -> RID
Creates a new dataset in the catalog.
Creates a dataset with specified types and description. The dataset can be associated with an execution and initialized with a specific version.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
dataset_types | str \| list[str] \| None | One or more dataset type terms from the Dataset_Type vocabulary. | None |
description | str | Description of the dataset's purpose and contents. | '' |
execution_rid | RID \| None | Optional execution RID to associate with dataset creation. | None |
version | DatasetVersion \| None | Optional initial version number. Defaults to 0.1.0. | None |

Returns:

Name | Type | Description |
---|---|---|
RID | RID | Resource Identifier of the newly created dataset. |

Raises:

Type | Description |
---|---|
DerivaMLException | If dataset_types are invalid or creation fails. |
Example

>>> rid = ml.create_dataset(
...     dataset_types=["experiment", "raw_data"],
...     description="RNA sequencing experiment data",
...     version=DatasetVersion(1, 0, 0)
... )
Source code in src/deriva_ml/dataset/dataset.py
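DatasetVersion follows a major.minor.patch scheme: create_dataset defaults to 0.1.0, and create_execution mints a new minor version when none is pinned. A stdlib sketch of that behavior (a toy stand-in, assuming ordinary semantic-version semantics):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True, order=True)
class DatasetVersionSketch:
    """Toy stand-in for deriva-ml's DatasetVersion."""
    major: int = 0
    minor: int = 1
    patch: int = 0

    def increment_minor(self) -> "DatasetVersionSketch":
        # Bumping minor resets patch, as in semantic versioning.
        return replace(self, minor=self.minor + 1, patch=0)

    def __str__(self) -> str:
        return f"{self.major}.{self.minor}.{self.patch}"

v = DatasetVersionSketch()       # the documented default, 0.1.0
v2 = v.increment_minor()         # what an unpinned execution implies
print(v, "->", v2)
```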
create_execution
create_execution(
configuration: ExecutionConfiguration,
dry_run: bool = False,
) -> "Execution"
Creates an execution environment.
Given an execution configuration, initialize the local compute environment to prepare for executing an ML or analytic routine. This routine has a number of side effects.
- The datasets specified in the configuration are downloaded and placed in the cache-dir. If a version is not specified in the configuration, then a new minor version number is created for the dataset and downloaded.
- If any execution assets are provided in the configuration, they are downloaded and placed in the working directory.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
configuration | ExecutionConfiguration | The ExecutionConfiguration for the execution. | required |
dry_run | bool | Do not create an execution record or upload results. | False |

Returns:

Type | Description |
---|---|
'Execution' | An execution object. |
Source code in src/deriva_ml/core/base.py
create_feature
create_feature(
    target_table: Table | str,
    feature_name: str,
    terms: list[Table | str] | None = None,
    assets: list[Table | str] | None = None,
    metadata: list[ColumnDefinition | Table | Key | str] | None = None,
    optional: list[str] | None = None,
    comment: str = "",
) -> type[FeatureRecord]
Creates a new feature definition.
A feature represents a measurable property or characteristic that can be associated with records in the target table. Features can include vocabulary terms, asset references, and additional metadata.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
target_table | Table \| str | Table to associate the feature with (name or Table object). | required |
feature_name | str | Unique name for the feature within the target table. | required |
terms | list[Table \| str] \| None | Optional vocabulary tables/names whose terms can be used as feature values. | None |
assets | list[Table \| str] \| None | Optional asset tables/names that can be referenced by this feature. | None |
metadata | list[ColumnDefinition \| Table \| Key \| str] \| None | Optional columns, tables, or keys to include in a feature definition. | None |
optional | list[str] \| None | Column names that are not required when creating feature instances. | None |
comment | str | Description of the feature's purpose and usage. | '' |

Returns:

Type | Description |
---|---|
type[FeatureRecord] | Feature class for creating validated instances. |

Raises:

Type | Description |
---|---|
DerivaMLException | If a feature definition is invalid or conflicts with existing features. |
Examples:

Create a feature with confidence score:

>>> feature_class = ml.create_feature(
...     target_table="samples",
...     feature_name="expression_level",
...     terms=["expression_values"],
...     metadata=[ColumnDefinition(name="confidence", type=BuiltinTypes.float4)],
...     comment="Gene expression measurement"
... )
Source code in src/deriva_ml/core/base.py
create_table
create_table(table: TableDefinition) -> Table
Creates a new table in the catalog.
Creates a table using the provided TableDefinition object, which specifies the table structure including columns, keys, and foreign key relationships.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
table | TableDefinition | A TableDefinition object containing the complete specification of the table to create. | required |

Returns:

Name | Type | Description |
---|---|---|
Table | Table | The newly created ERMrest table object. |

Raises:

Type | Description |
---|---|
DerivaMLException | If table creation fails or the definition is invalid. |
Example:
>>> table_def = TableDefinition(
... name="experiments",
... column_definitions=[
... ColumnDefinition(name="name", type=BuiltinTypes.text),
... ColumnDefinition(name="date", type=BuiltinTypes.date)
... ]
... )
>>> new_table = ml.create_table(table_def)
Source code in src/deriva_ml/core/base.py
create_vocabulary
create_vocabulary(
vocab_name: str,
comment: str = "",
schema: str | None = None,
) -> Table
Creates a controlled vocabulary table.
A controlled vocabulary table maintains a list of standardized terms and their definitions. Each term can have synonyms and descriptions to ensure consistent terminology usage across the dataset.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`vocab_name` | `str` | Name for the new vocabulary table. Must be a valid SQL identifier. | required |
`comment` | `str` | Description of the vocabulary's purpose and usage. Defaults to empty string. | `''` |
`schema` | `str \| None` | Schema name to create the table in. If None, uses domain_schema. | `None` |

Returns:

Name | Type | Description |
---|---|---|
`Table` | `Table` | ERMrest table object representing the newly created vocabulary table. |

Raises:

Type | Description |
---|---|
`DerivaMLException` | If vocab_name is invalid or already exists. |
Examples:
Create a vocabulary for tissue types:
>>> table = ml.create_vocabulary(
... vocab_name="tissue_types",
... comment="Standard tissue classifications",
... schema="bio_schema"
... )
Source code in src/deriva_ml/core/base.py
create_workflow
create_workflow(
name: str,
workflow_type: str,
description: str = "",
) -> Workflow
Creates a new workflow definition.
Creates a Workflow object that represents a computational process or analysis pipeline. The workflow type must be a term from the controlled vocabulary. This method is typically used to define new analysis workflows before execution.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`name` | `str` | Name of the workflow. | required |
`workflow_type` | `str` | Type of workflow (must exist in the workflow_type vocabulary). | required |
`description` | `str` | Description of what the workflow does. | `''` |

Returns:

Name | Type | Description |
---|---|---|
`Workflow` | `Workflow` | New workflow object ready for registration. |

Raises:

Type | Description |
---|---|
`DerivaMLException` | If workflow_type is not in the vocabulary. |
Examples:
>>> workflow = ml.create_workflow(
... name="RNA Analysis",
... workflow_type="python_notebook",
... description="RNA sequence analysis pipeline"
... )
>>> rid = ml.add_workflow(workflow)
Source code in src/deriva_ml/core/base.py
dataset_history
dataset_history(
dataset_rid: RID,
) -> list[DatasetHistory]
Retrieves the version history of a dataset.
Returns a chronological list of dataset versions, including their version numbers, creation times, and associated metadata.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`dataset_rid` | `RID` | Resource Identifier of the dataset. | required |

Returns:

Type | Description |
---|---|
`list[DatasetHistory]` | List of history entries, each containing: dataset_version (version number, major.minor.patch), minid (Minimal Viable Identifier), snapshot (catalog snapshot time), dataset_rid (dataset Resource Identifier), version_rid (version Resource Identifier), description (version description), and execution_rid (associated execution RID). |

Raises:

Type | Description |
---|---|
`DerivaMLException` | If dataset_rid is not a valid dataset RID. |

Example:
>>> history = ml.dataset_history("1-abc123")
>>> for entry in history:
...     print(f"Version {entry.dataset_version}: {entry.description}")
Source code in src/deriva_ml/dataset/dataset.py
dataset_version
dataset_version(
dataset_rid: RID,
) -> DatasetVersion
Retrieve the current version of the specified dataset_table.
Given a RID, return the most recent version of the dataset. It is important to remember that this version captures the state of the catalog at the time the version was created, not the current state of the catalog. This means that the values associated with an object in the catalog may differ from the values of that object in the dataset.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`dataset_rid` | `RID` | The RID of the dataset to retrieve the version for. | required |

Returns:

Type | Description |
---|---|
`DatasetVersion` | A tuple with the semantic version of the dataset. |
Source code in src/deriva_ml/dataset/dataset.py
delete_dataset
delete_dataset(
dataset_rid: RID,
recurse: bool = False,
) -> None
Delete a dataset_table from the catalog.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`dataset_rid` | `RID` | RID of the dataset_table to delete. | required |
`recurse` | `bool` | If True, delete the dataset_table along with any nested datasets. Defaults to False. | `False` |
Source code in src/deriva_ml/dataset/dataset.py
delete_dataset_members
delete_dataset_members(
dataset_rid: RID,
members: list[RID],
description: str = "",
execution_rid: RID | None = None,
) -> None
Remove elements from an existing dataset.

Deletes elements from an existing dataset. In addition to deleting members, the minor version number of the dataset is incremented and the description, if provided, is applied to that new version.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`dataset_rid` | `RID` | RID of the dataset_table to remove members from. | required |
`members` | `list[RID]` | List of member RIDs to remove from the dataset_table. | required |
`description` | `str` | Markdown description of the updated dataset. | `''` |
`execution_rid` | `RID \| None` | Optional RID of an execution associated with this operation. | `None` |
Source code in src/deriva_ml/dataset/dataset.py
delete_feature
delete_feature(
table: Table | str,
feature_name: str,
) -> bool
Removes a feature definition and its data.
Deletes the feature and its implementation table from the catalog. This operation cannot be undone and will remove all feature values associated with this feature.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`table` | `Table \| str` | The table containing the feature, either as a name or Table object. | required |
`feature_name` | `str` | Name of the feature to delete. | required |

Returns:

Name | Type | Description |
---|---|---|
`bool` | `bool` | True if the feature was successfully deleted, False if it didn't exist. |

Raises:

Type | Description |
---|---|
`DerivaMLException` | If deletion fails due to constraints or permissions. |

Example:
>>> success = ml.delete_feature("samples", "obsolete_feature")
>>> print("Deleted" if success else "Not found")
Source code in src/deriva_ml/core/base.py
download_dataset_bag
download_dataset_bag(
dataset: DatasetSpec,
execution_rid: RID | None = None,
) -> DatasetBag
Downloads a dataset to the local filesystem and creates a MINID if needed.
Downloads a dataset specified by DatasetSpec to the local filesystem. If the dataset doesn't have a MINID (Minimal Viable Identifier), one will be created. The dataset can optionally be associated with an execution record.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`dataset` | `DatasetSpec` | Specification of the dataset to download, including version and materialization options. | required |
`execution_rid` | `RID \| None` | Optional execution RID to associate the download with. | `None` |

Returns:

Name | Type | Description |
---|---|---|
`DatasetBag` | `DatasetBag` | Object containing: path (local filesystem path to the downloaded dataset), rid (the dataset's Resource Identifier), and minid (the dataset's Minimal Viable Identifier). |

Examples:

Download with default options:
>>> spec = DatasetSpec(rid="1-abc123")
>>> bag = ml.download_dataset_bag(dataset=spec)
>>> print(f"Downloaded to {bag.path}")

Download with execution tracking:
>>> bag = ml.download_dataset_bag(
...     dataset=DatasetSpec(rid="1-abc123", materialize=True),
...     execution_rid="1-xyz789"
... )
Source code in src/deriva_ml/core/base.py
download_dir
download_dir(
cached: bool = False,
) -> Path
Returns the appropriate download directory.
Provides the appropriate directory path for storing downloaded files, either in the cache or working directory.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`cached` | `bool` | If True, returns the cache directory path. If False, returns the working directory path. | `False` |

Returns:

Name | Type | Description |
---|---|---|
`Path` | `Path` | Directory path where downloaded files should be stored. |

Example:
>>> cache_dir = ml.download_dir(cached=True)
>>> work_dir = ml.download_dir(cached=False)
Source code in src/deriva_ml/core/base.py
feature_record_class
feature_record_class(
table: str | Table,
feature_name: str,
) -> type[FeatureRecord]
Returns a pydantic model class for feature records.
Creates a typed interface for creating new instances of the specified feature. The returned class includes validation and type checking based on the feature's definition.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`table` | `str \| Table` | The table containing the feature, either as a name or Table object. | required |
`feature_name` | `str` | Name of the feature to create a record class for. | required |

Returns:

Type | Description |
---|---|
`type[FeatureRecord]` | A pydantic model class for creating validated feature records. |

Raises:

Type | Description |
---|---|
`DerivaMLException` | If the feature doesn't exist or the table is invalid. |

Example:
>>> ExpressionFeature = ml.feature_record_class("samples", "expression_level")
>>> feature = ExpressionFeature(value="high", confidence=0.95)
Source code in src/deriva_ml/core/base.py
find_datasets
find_datasets(
deleted: bool = False,
) -> Iterable[dict[str, Any]]
Returns a list of currently available datasets.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`deleted` | `bool` | If True, include datasets that have been deleted. | `False` |

Returns:

Type | Description |
---|---|
`Iterable[dict[str, Any]]` | List of currently available datasets. |
Source code in src/deriva_ml/dataset/dataset.py
globus_login
staticmethod
globus_login(host: str) -> None
Authenticates with Globus for accessing Deriva services.
Performs authentication using Globus Auth to access Deriva services. If already logged in, notifies the user. Uses non-interactive authentication flow without a browser or local server.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`host` | `str` | The hostname of the Deriva server to authenticate with (e.g., 'deriva.example.org'). | required |

Example:
>>> DerivaML.globus_login('deriva.example.org')
'Login Successful'
Source code in src/deriva_ml/core/base.py
increment_dataset_version
increment_dataset_version(
dataset_rid: RID,
component: VersionPart,
description: str | None = "",
execution_rid: RID | None = None,
) -> DatasetVersion
Increments a dataset's version number.
Creates a new version of the dataset by incrementing the specified version component (major, minor, or patch). The new version is recorded with an optional description and execution reference.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`dataset_rid` | `RID` | Resource Identifier of the dataset to version. | required |
`component` | `VersionPart` | Which version component to increment ('major', 'minor', or 'patch'). | required |
`description` | `str \| None` | Optional description of the changes in this version. | `''` |
`execution_rid` | `RID \| None` | Optional execution RID to associate with this version. | `None` |

Returns:

Name | Type | Description |
---|---|---|
`DatasetVersion` | `DatasetVersion` | The new version number. |

Raises:

Type | Description |
---|---|
`DerivaMLException` | If dataset_rid is invalid or the version increment fails. |

Example:
>>> new_version = ml.increment_dataset_version(
...     dataset_rid="1-abc123",
...     component="minor",
...     description="Added new samples"
... )
>>> print(f"New version: {new_version}")  # e.g., "1.2.0"
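The major.minor.patch semantics of the increment can be sketched in plain Python. This is a standalone illustration, independent of deriva-ml; `bump_version` is a hypothetical helper, not part of the API:

```python
# Illustrative sketch of major.minor.patch increment semantics.
# `bump_version` is a hypothetical helper, not part of deriva-ml.

def bump_version(version: str, component: str) -> str:
    """Increment one component of a semantic version string."""
    major, minor, patch = (int(p) for p in version.split("."))
    if component == "major":
        return f"{major + 1}.0.0"      # reset minor and patch
    elif component == "minor":
        return f"{major}.{minor + 1}.0"  # reset patch
    elif component == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown version component: {component}")

print(bump_version("1.1.3", "minor"))  # -> 1.2.0
```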
Source code in src/deriva_ml/dataset/dataset.py
list_assets
list_assets(
asset_table: Table | str,
) -> list[dict[str, Any]]
Lists contents of an asset table.
Returns a list of assets with their types for the specified asset table.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`asset_table` | `Table \| str` | Table or name of the asset table to list assets for. | required |

Returns:

Type | Description |
---|---|
`list[dict[str, Any]]` | List of asset records, each containing: RID (resource identifier), Type (asset type), and Metadata (asset metadata). |

Raises:

Type | Description |
---|---|
`DerivaMLException` | If the table is not an asset table or doesn't exist. |

Example:
>>> assets = ml.list_assets("tissue_types")
>>> for asset in assets:
...     print(f"{asset['RID']}: {asset['Type']}")
Source code in src/deriva_ml/core/base.py
list_dataset_children
list_dataset_children(
dataset_rid: RID,
recurse: bool = False,
) -> list[RID]
Given a dataset_table RID, return a list of RIDs for any nested datasets.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`dataset_rid` | `RID` | A dataset_table RID. | required |
`recurse` | `bool` | If True, recursively include RIDs of nested datasets. | `False` |

Returns:

Type | Description |
---|---|
`list[RID]` | List of nested dataset RIDs. |
Source code in src/deriva_ml/dataset/dataset.py
list_dataset_element_types
list_dataset_element_types() -> (
Iterable[Table]
)
List the types of entities that can be added to a dataset_table.
Returns:

Type | Description |
---|---|
`Iterable[Table]` | An iterable of Table objects that can be included as an element of a dataset_table. |
Source code in src/deriva_ml/dataset/dataset.py
list_dataset_members
list_dataset_members(
dataset_rid: RID,
recurse: bool = False,
limit: int | None = None,
) -> dict[str, list[dict[str, Any]]]
Lists members of a dataset.
Returns a dictionary mapping member types to lists of member records. Can optionally recurse through nested datasets and limit the number of results.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`dataset_rid` | `RID` | Resource Identifier of the dataset. | required |
`recurse` | `bool` | Whether to include members of nested datasets. Defaults to False. | `False` |
`limit` | `int \| None` | Maximum number of members to return per type. None for no limit. | `None` |

Returns:

Type | Description |
---|---|
`dict[str, list[dict[str, Any]]]` | Dictionary mapping member types to lists of members. Each member is a dictionary containing the record's attributes. |

Raises:

Type | Description |
---|---|
`DerivaMLException` | If dataset_rid is invalid. |

Example:
>>> members = ml.list_dataset_members("1-abc123", recurse=True)
>>> for type_name, records in members.items():
...     print(f"{type_name}: {len(records)} records")
Source code in src/deriva_ml/dataset/dataset.py
list_dataset_parents
list_dataset_parents(
dataset_rid: RID,
) -> list[str]
Given a dataset_table RID, return the RIDs of any parent datasets that include it as a nested dataset.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`dataset_rid` | `RID` | RID of the dataset_table whose parents are to be found. | required |

Returns:

Type | Description |
---|---|
`list[str]` | RIDs of the parent datasets. |
Source code in src/deriva_ml/dataset/dataset.py
list_feature_values
list_feature_values(
table: Table | str,
feature_name: str,
) -> datapath._ResultSet
Retrieves all values for a feature.
Returns all instances of the specified feature that have been created, including their associated metadata and references.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`table` | `Table \| str` | The table containing the feature, either as a name or Table object. | required |
`feature_name` | `str` | Name of the feature to retrieve values for. | required |

Returns:

Type | Description |
---|---|
`datapath._ResultSet` | A result set containing all feature values and their metadata. |

Raises:

Type | Description |
---|---|
`DerivaMLException` | If the feature doesn't exist or cannot be accessed. |

Example:
>>> values = ml.list_feature_values("samples", "expression_level")
>>> for value in values:
...     print(f"Sample {value['RID']}: {value['value']}")
Source code in src/deriva_ml/core/base.py
list_files
list_files(
file_types: list[str] | None = None,
) -> list[dict[str, Any]]
Lists files in the catalog with their metadata.
Returns a list of files with their metadata including URL, MD5 hash, length, description, and associated file types. Files can be optionally filtered by type.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`file_types` | `list[str] \| None` | Filter results to only include these file types. | `None` |

Returns:

Type | Description |
---|---|
`list[dict[str, Any]]` | List of file records, each containing: RID (resource identifier), URL (file location), MD5 (file hash), Length (file size), Description (file description), and File_Types (list of associated file types). |

Examples:

List all files:
>>> files = ml.list_files()
>>> for f in files:
...     print(f"{f['RID']}: {f['URL']}")

Filter by file type:
>>> image_files = ml.list_files(["image", "png"])
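The same type-based filtering can also be done client-side over records shaped like the `list_files` output. A minimal sketch, with sample records invented for illustration only:

```python
# Minimal sketch: client-side filtering of file records (shaped like
# the documented list_files output) by file type. The sample records
# below are invented for illustration.

from typing import Any

def filter_by_type(files: list[dict[str, Any]], wanted: list[str]) -> list[dict[str, Any]]:
    """Keep records whose File_Types overlap the wanted types."""
    wanted_set = set(wanted)
    return [f for f in files if wanted_set & set(f.get("File_Types", []))]

files = [
    {"RID": "1-a", "URL": "/data/x.png", "File_Types": ["image", "png"]},
    {"RID": "1-b", "URL": "/data/y.csv", "File_Types": ["tabular"]},
]
print([f["RID"] for f in filter_by_type(files, ["image"])])  # -> ['1-a']
```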
Source code in src/deriva_ml/core/base.py
list_vocabulary_terms
list_vocabulary_terms(
table: str | Table,
) -> list[VocabularyTerm]
Lists all terms in a vocabulary table.
Retrieves all terms, their descriptions, and synonyms from a controlled vocabulary table.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`table` | `str \| Table` | Vocabulary table to list terms from (name or Table object). | required |

Returns:

Type | Description |
---|---|
`list[VocabularyTerm]` | List of vocabulary terms with their metadata. |

Raises:

Type | Description |
---|---|
`DerivaMLException` | If the table doesn't exist or is not a vocabulary table. |
Examples:
>>> terms = ml.list_vocabulary_terms("tissue_types")
>>> for term in terms:
... print(f"{term.name}: {term.description}")
... if term.synonyms:
... print(f" Synonyms: {', '.join(term.synonyms)}")
Source code in src/deriva_ml/core/base.py
list_workflows
list_workflows() -> list[Workflow]
Lists all workflows in the catalog.
Retrieves all workflow definitions, including their names, URLs, types, versions, and descriptions.
Returns:

Type | Description |
---|---|
`list[Workflow]` | List of workflow objects, each containing: name (workflow name), url (source code URL), workflow_type (type of workflow), version (version identifier), description (workflow description), rid (resource identifier), and checksum (source code checksum). |

Examples:
>>> workflows = ml.list_workflows()
>>> for w in workflows:
...     print(f"{w.name} (v{w.version}): {w.description}")
...     print(f"  Source: {w.url}")
Source code in src/deriva_ml/core/base.py
lookup_feature
lookup_feature(
table: str | Table,
feature_name: str,
) -> Feature
Retrieves a Feature object.
Looks up and returns a Feature object that provides an interface to work with an existing feature definition in the catalog.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`table` | `str \| Table` | The table containing the feature, either as a name or Table object. | required |
`feature_name` | `str` | Name of the feature to look up. | required |

Returns:

Name | Type | Description |
---|---|---|
`Feature` | `Feature` | An object representing the feature and its implementation. |

Raises:

Type | Description |
---|---|
`DerivaMLException` | If the feature doesn't exist in the specified table. |

Example:
>>> feature = ml.lookup_feature("samples", "expression_level")
>>> print(feature.feature_name)
'expression_level'
Source code in src/deriva_ml/core/base.py
lookup_term
lookup_term(
table: str | Table, term_name: str
) -> VocabularyTerm
Finds a term in a vocabulary table.
Searches for a term in the specified vocabulary table, matching either the primary name or any of its synonyms.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`table` | `str \| Table` | Vocabulary table to search in (name or Table object). | required |
`term_name` | `str` | Name or synonym of the term to find. | required |

Returns:

Name | Type | Description |
---|---|---|
`VocabularyTerm` | `VocabularyTerm` | The matching vocabulary term. |

Raises:

Type | Description |
---|---|
`DerivaMLVocabularyException` | If the table is not a vocabulary table, or the term is not found. |

Examples:

Look up by primary name:
>>> term = ml.lookup_term("tissue_types", "epithelial")
>>> print(term.description)

Look up by synonym:
>>> term = ml.lookup_term("tissue_types", "epithelium")
Source code in src/deriva_ml/core/base.py
lookup_workflow
lookup_workflow(
url_or_checksum: str,
) -> RID | None
Finds a workflow by URL or checksum.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`url_or_checksum` | `str` | URL or checksum of the workflow. | required |

Returns:

Type | Description |
---|---|
`RID \| None` | Resource Identifier of the workflow if found, None otherwise. |

Example:
>>> rid = ml.lookup_workflow("https://github.com/org/repo/workflow.py")
>>> if rid:
...     print(f"Found workflow: {rid}")
Source code in src/deriva_ml/core/base.py
resolve_rid
resolve_rid(
rid: RID,
) -> ResolveRidResult
Resolves RID to catalog location.
Looks up a RID and returns information about where it exists in the catalog, including schema, table, and column metadata.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`rid` | `RID` | Resource Identifier to resolve. | required |

Returns:

Name | Type | Description |
---|---|---|
`ResolveRidResult` | `ResolveRidResult` | Named tuple containing: schema (schema name), table (table name), columns (column definitions), and datapath (path builder for accessing the entity). |

Raises:

Type | Description |
---|---|
`DerivaMLException` | If the RID doesn't exist in the catalog. |
Examples:
>>> result = ml.resolve_rid("1-abc123")
>>> print(f"Found in {result.schema}.{result.table}")
>>> data = result.datapath.entities().fetch()
Source code in src/deriva_ml/core/base.py
restore_execution
restore_execution(
execution_rid: RID | None = None,
) -> Execution
Restores a previous execution.
Given an execution RID, retrieves the execution configuration and restores the local compute environment. This routine has a number of side effects:

- The datasets specified in the configuration are downloaded and placed in the cache directory. If a version is not specified in the configuration, a new minor version number is created for the dataset and downloaded.
- If any execution assets are provided in the configuration, they are downloaded and placed in the working directory.

Parameters:

Name | Type | Description | Default |
---|---|---|---|
`execution_rid` | `RID \| None` | Resource Identifier (RID) of the execution to restore. | `None` |

Returns:

Name | Type | Description |
---|---|---|
`Execution` | `Execution` | An execution object representing the restored execution environment. |

Raises:

Type | Description |
---|---|
`DerivaMLException` | If execution_rid is not valid or the execution cannot be restored. |

Example:
>>> execution = ml.restore_execution("1-abc123")
Source code in src/deriva_ml/core/base.py
retrieve_rid
retrieve_rid(
rid: RID,
) -> dict[str, Any]
Retrieves complete record for RID.
Fetches all column values for the entity identified by the RID.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`rid` | `RID` | Resource Identifier of the record to retrieve. | required |

Returns:

Type | Description |
---|---|
`dict[str, Any]` | Dictionary containing all column values for the entity. |

Raises:

Type | Description |
---|---|
`DerivaMLException` | If the RID doesn't exist in the catalog. |

Example:
>>> record = ml.retrieve_rid("1-abc123")
>>> print(f"Name: {record['name']}, Created: {record['creation_date']}")
Source code in src/deriva_ml/core/base.py
table_path
table_path(table: str | Table) -> Path
Returns a local filesystem path for table CSV files.
Generates a standardized path where CSV files should be placed when preparing to upload data to a table. The path follows the project's directory structure conventions.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`table` | `str \| Table` | Name of the table or Table object to get the path for. | required |

Returns:

Name | Type | Description |
---|---|---|
`Path` | `Path` | Filesystem path where the CSV file should be placed. |

Example:
>>> path = ml.table_path("experiment_results")
>>> df.to_csv(path)  # Save data for upload
Source code in src/deriva_ml/core/base.py
user_list
user_list() -> List[Dict[str, str]]
Returns catalog user list.
Retrieves basic information about all users who have access to the catalog, including their identifiers and full names.
Returns:

Type | Description |
---|---|
`List[Dict[str, str]]` | List of user information dictionaries, each containing 'ID' (user identifier) and 'Full_Name' (user's full name). |
Examples:
>>> users = ml.user_list()
>>> for user in users:
... print(f"{user['Full_Name']} ({user['ID']})")
Source code in src/deriva_ml/core/base.py
DerivaMLException
Bases: Exception
Exception class specific to DerivaML module.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`msg` | `str` | Optional message for the exception. | `''` |
Source code in src/deriva_ml/core/exceptions.py
DerivaMLInvalidTerm
Bases: DerivaMLException
Exception class for invalid terms in DerivaML controlled vocabulary.
Source code in src/deriva_ml/core/exceptions.py
__init__
__init__(
vocabulary,
term: str,
msg: str = "Term doesn't exist",
)
Exception indicating that a term is not defined in the controlled vocabulary.
Source code in src/deriva_ml/core/exceptions.py
DerivaMLTableTypeError
Bases: DerivaMLException
RID for table is not of correct type.
Source code in src/deriva_ml/core/exceptions.py
__init__
__init__(table_type, table: str)
Exception indicating that a table is not of the expected type.
Source code in src/deriva_ml/core/exceptions.py
ExecAssetType
Bases: BaseStrEnum
Execution asset type identifiers.
Defines the types of assets that can be produced during an execution.
Attributes:

Name | Type | Description |
---|---|---|
`input_file` | `str` | Input file used by the execution. |
`output_file` | `str` | Output file produced by the execution. |
`notebook_output` | `str` | Jupyter notebook output from the execution. |
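The string-valued enum pattern behind these identifiers can be sketched with the standard library. This is an assumption-laden illustration: the member values below are invented, not deriva-ml's actual stored strings:

```python
# Sketch of a str-valued enum like BaseStrEnum members.
# The member values here are invented for illustration; they are
# not deriva-ml's actual stored strings.

from enum import Enum

class ExecAssetType(str, Enum):
    input_file = "Input_File"
    output_file = "Output_File"
    notebook_output = "Notebook_Output"

# Subclassing str lets members be used anywhere a plain string is expected.
print(ExecAssetType.output_file.value)  # -> Output_File
```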
Source code in src/deriva_ml/core/enums.py
ExecMetadataType
Bases: BaseStrEnum
Execution metadata type identifiers.
Defines the types of metadata that can be associated with an execution.
Attributes:

Name | Type | Description |
---|---|---|
`execution_config` | `str` | Execution configuration data. |
`runtime_env` | `str` | Runtime environment information. |
Source code in src/deriva_ml/core/enums.py
FileSpec
Bases: BaseModel
An entry in the File table.

Attributes:

Name | Type | Description |
---|---|---|
`url` | `str` | The URL of the file. |
`description` | `str \| None` | The description of the file. |
`md5` | `str` | The MD5 hash of the file. |
`length` | `int` | The length of the file in bytes. |
`file_types` | `conlist(str) \| None` | A list of file types. Each file type should be a defined term in the MLVocab.file_type vocabulary. |
Source code in src/deriva_ml/core/filespec.py
create_filespecs
classmethod
create_filespecs(
path: Path | str,
description: str,
file_types: list[str]
| Callable[[Path], list[str]]
| None = None,
) -> Generator[FileSpec, None, None]
Given a file or directory, generate the sequence of corresponding FileSpecs suitable to create a File table.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`path` | `Path \| str` | Path to the file or directory. | required |
`description` | `str` | The description of the file(s). | required |
`file_types` | `list[str] \| Callable[[Path], list[str]] \| None` | A list of file types, or a function that takes a file path and returns a list of file types. | `None` |

Returns:

Type | Description |
---|---|
`Generator[FileSpec, None, None]` | An iterable of FileSpecs, one for each file in the directory. |
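The per-file metadata a FileSpec carries (MD5 hash and byte length) can be computed with the standard library. A minimal sketch, mirroring the documented field names but not deriva-ml's actual implementation:

```python
# Sketch of computing the metadata a FileSpec carries (md5, length).
# The dict mirrors the documented FileSpec fields but is an
# illustration, not deriva-ml's actual implementation.

import hashlib
import tempfile
from pathlib import Path

def file_metadata(path: Path) -> dict:
    """Compute MD5 hash and byte length for one file."""
    data = path.read_bytes()
    return {
        "url": path.resolve().as_uri(),
        "md5": hashlib.md5(data).hexdigest(),
        "length": len(data),
    }

with tempfile.TemporaryDirectory() as d:
    p = Path(d) / "example.txt"
    p.write_text("hello")
    meta = file_metadata(p)
    print(meta["length"])  # -> 5
```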
Source code in src/deriva_ml/core/filespec.py
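The `md5` and `length` fields above suggest the shape of this operation: walk the path, hash each file, and yield one spec per file. A stdlib-only sketch of that pattern — the dicts here are stand-ins for the real Pydantic FileSpec models, and the real classmethod converts local paths to tag URLs rather than the `file://` URLs used below:

```python
import hashlib
from pathlib import Path
from typing import Generator, Union


def sketch_filespecs(path: Union[Path, str], description: str) -> Generator[dict, None, None]:
    """Yield a FileSpec-like dict for each file under path (or for path itself)."""
    root = Path(path)
    # A single file yields one spec; a directory is walked recursively.
    files = [root] if root.is_file() else sorted(p for p in root.rglob("*") if p.is_file())
    for f in files:
        data = f.read_bytes()
        yield {
            "url": f.resolve().as_uri(),           # file:// stand-in for the real tag URL
            "description": description,
            "md5": hashlib.md5(data).hexdigest(),  # content hash, as in the md5 attribute
            "length": len(data),                   # file size in bytes
        }
```

Hashing and sizing up front is what lets the catalog detect duplicate or corrupted uploads later, which is presumably why both fields are part of the File table entry.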
read_filespec

staticmethod

    read_filespec(
        path: Path | str,
    ) -> Generator[FileSpec, None, None]

Get FileSpecs from a JSON lines file.

Parameters:

Name | Type | Description | Default
---|---|---|---
`path` | `Path \| str` | Path to the .jsonl file (string or Path). | *required*

Yields:

Type | Description
---|---
`FileSpec` | A FileSpec object.
Source code in src/deriva_ml/core/filespec.py
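JSON lines is a simple format: one JSON object per line. A sketch of the read side, assuming that layout — the real staticmethod yields validated FileSpec models rather than the plain dicts used here:

```python
import json
from pathlib import Path
from typing import Generator, Union


def sketch_read_filespec(path: Union[Path, str]) -> Generator[dict, None, None]:
    """Yield one parsed record per line of a JSON-lines (.jsonl) file."""
    with Path(path).open(encoding="utf-8") as f:
        for line in f:
            if line.strip():          # tolerate blank lines between records
                yield json.loads(line)
```

Because each record occupies its own line, the file can be streamed record by record, which is why the method yields a generator instead of loading the whole file at once.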
validate_file_url

classmethod

    validate_file_url(url: str) -> str

Examine the provided URL. If it is a local path, convert it into a tag URL.

Parameters:

Name | Type | Description | Default
---|---|---|---
`url` | `str` | The URL to validate and potentially convert. | *required*

Returns:

Type | Description
---|---
`str` | The validated or converted URL.

Raises:

Type | Description
---|---
`ValidationError` | If the URL is not a file URL.
Source code in src/deriva_ml/core/filespec.py
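The validate-or-convert pattern described here can be sketched with the standard library. To be clear about the assumptions: the exact tag-URL format deriva-ml produces is not shown in this page (it lives in src/deriva_ml/core/filespec.py), so `file://` serves as a stand-in for the conversion step, `ValueError` stands in for Pydantic's `ValidationError`, and the set of accepted schemes is a guess:

```python
from pathlib import Path
from urllib.parse import urlparse


def sketch_validate_file_url(url: str) -> str:
    """Pass recognized URLs through; convert bare local paths to URLs."""
    scheme = urlparse(url).scheme
    if scheme == "":
        # No scheme: treat as a local path. The real validator builds a
        # tag URL here; file:// is only an illustrative stand-in.
        return Path(url).resolve().as_uri()
    if scheme in ("file", "tag", "http", "https"):  # assumed scheme list
        return url
    raise ValueError(f"not a file URL: {url}")      # stands in for ValidationError
```

Normalizing at validation time means the rest of the code can assume every `FileSpec.url` is a proper URL, never a bare filesystem path.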
FileUploadState

Bases: BaseModel

Tracks the state and result of a file upload operation.

Attributes:

Name | Type | Description
---|---|---
`state` | `UploadState` | Current state of the upload (success, failed, etc.).
`status` | `str` | Detailed status message.
`result` | `Any` | Upload result data, if any.
`rid` | `RID \| None` | Resource identifier of the uploaded file, if successful.
Source code in src/deriva_ml/core/ermrest.py
MLAsset

Bases: BaseStrEnum

Asset type identifiers.

Defines the types of assets that can be associated with executions.

Attributes:

Name | Type | Description
---|---|---
`execution_metadata` | `str` | Metadata about an execution.
`execution_asset` | `str` | Asset produced by an execution.
Source code in src/deriva_ml/core/enums.py
MLVocab

Bases: BaseStrEnum

Controlled vocabulary type identifiers.

Defines the names of controlled vocabulary tables used in DerivaML for various types of entities and attributes.

Attributes:

Name | Type | Description
---|---|---
`dataset_type` | `str` | Dataset classification vocabulary.
`workflow_type` | `str` | Workflow classification vocabulary.
`asset_type` | `str` | Asset classification vocabulary.
`asset_role` | `str` | Asset role classification vocabulary.
Source code in src/deriva_ml/core/enums.py
TableDefinition

Bases: BaseModel

Defines a complete table structure in ERMrest.

Provides a Pydantic model for defining tables with their columns, keys, and relationships. Maps to deriva_py's Table.define functionality.

Attributes:

Name | Type | Description
---|---|---
`name` | `str` | Name of the table.
`column_defs` | `Iterable[ColumnDefinition]` | Column definitions.
`key_defs` | `Iterable[KeyDefinition]` | Key constraint definitions.
`fkey_defs` | `Iterable[ForeignKeyDefinition]` | Foreign key relationship definitions.
`comment` | `str \| None` | Description of the table's purpose.
`acls` | `dict` | Access control lists.
`acl_bindings` | `dict` | Dynamic access control bindings.
`annotations` | `dict` | Additional metadata annotations.

Example

    >>> table = TableDefinition(
    ...     name="experiment",
    ...     column_defs=[
    ...         ColumnDefinition(name="id", type=BuiltinTypes.text),
    ...         ColumnDefinition(name="date", type=BuiltinTypes.date),
    ...     ],
    ...     comment="Experimental data records",
    ... )
Source code in src/deriva_ml/core/ermrest.py
UploadState

Bases: Enum

File upload operation states.

Represents the various states a file upload operation can be in, from initiation to completion.

Attributes:

Name | Type | Description
---|---|---
`success` | `int` | Upload completed successfully.
`failed` | `int` | Upload failed.
`pending` | `int` | Upload is queued.
`running` | `int` | Upload is in progress.
`paused` | `int` | Upload is temporarily paused.
`aborted` | `int` | Upload was aborted.
`cancelled` | `int` | Upload was cancelled.
`timeout` | `int` | Upload timed out.
Source code in src/deriva_ml/core/enums.py
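A common way to consume a state enum like this is to ask whether an upload can still make progress. The sketch below illustrates that usage; the integer values and the grouping into "terminal" states are assumptions for illustration — the real values are defined in src/deriva_ml/core/enums.py:

```python
from enum import Enum


class UploadState(Enum):
    # Assumed integer values; the real ones live in src/deriva_ml/core/enums.py.
    success = 0
    failed = 1
    pending = 2
    running = 3
    paused = 4
    aborted = 5
    cancelled = 6
    timeout = 7


# States from which an upload will make no further progress (assumed grouping).
TERMINAL_STATES = frozenset(
    {UploadState.success, UploadState.failed, UploadState.aborted,
     UploadState.cancelled, UploadState.timeout}
)


def is_terminal(state: UploadState) -> bool:
    """True once the upload has finished, successfully or not."""
    return state in TERMINAL_STATES
```

A polling loop over FileUploadState records would keep waiting while `is_terminal` is false and inspect `status`, `result`, and `rid` once it turns true.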