COCOData.from_dataset
- classmethod COCOData.from_dataset(data: Dataset, batch_size: int = 64, shuffle: bool = True, num_workers: int = 0, pin_memory: bool = True, collate_fn: Optional[Callable] = None, num_classes: Optional[int] = None, label_map: Optional[Dict[int, str]] = None, transform_field: Optional[str] = 'transforms') → VD
Create a VisionData instance from a Dataset instance.
- Parameters
- data: Dataset
Instance of a Dataset.
- batch_size: int, default 64
How many samples per batch to load.
- shuffle: bool, default True
Set to True to have the data reshuffled at every epoch.
- num_workers: int, default 0
How many subprocesses to use for data loading. 0 means that the data will be loaded in the main process.
- pin_memory: bool, default True
If True, the data loader will copy Tensors into CUDA pinned memory before returning them.
- collate_fn: Optional[Callable]
Merges a list of samples to form a mini-batch of Tensor(s).
- num_classes: Optional[int], default None
Number of classes in the dataset. If not provided, will be inferred from the dataset.
- label_map: Optional[Dict[int, str]], default None
A dictionary mapping class ids to their names.
- transform_field: Optional[str], default 'transforms'
Name of the transforms field in the dataset, which holds the transformations applied to both the data and the labels.
- Returns
- VisionData
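A minimal usage sketch follows. Only the from_dataset call and its parameters come from the signature above; the import path for COCOData, the ToyDetectionDataset class, and the detection_collate helper are illustrative assumptions, and the sample format returned by the dataset must match whatever COCOData expects when formatting batches.

```python
# Minimal sketch, assuming COCOData is importable as below; adjust the
# import path to match your installation.
from typing import Dict, List, Tuple

import torch
from torch.utils.data import Dataset

from deepchecks.vision.datasets.detection.coco import COCOData  # assumed path


class ToyDetectionDataset(Dataset):
    """Illustrative stand-in dataset yielding (image, target) pairs."""

    def __len__(self) -> int:
        return 8

    def __getitem__(self, idx: int) -> Tuple[torch.Tensor, Dict[str, torch.Tensor]]:
        image = torch.rand(3, 64, 64)  # dummy RGB image
        target = {
            "boxes": torch.tensor([[4.0, 4.0, 20.0, 20.0]]),  # one xyxy box
            "labels": torch.tensor([0]),
        }
        return image, target


def detection_collate(batch: List) -> Tuple[list, list]:
    # Keep images and targets as lists rather than stacking, since the
    # number of boxes can differ between samples.
    images, targets = zip(*batch)
    return list(images), list(targets)


vision_data = COCOData.from_dataset(
    ToyDetectionDataset(),
    batch_size=4,
    shuffle=False,          # keep sample order deterministic for the example
    num_workers=0,          # load in the main process
    collate_fn=detection_collate,
    num_classes=1,
    label_map={0: "object"},
)
```

If num_classes is omitted, the class count is inferred from the dataset, per the parameter description above.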