Metadata-Version: 2.1
Name: robbytorch
Version: 0.1.3
Summary: Cool package for robust AI
Home-page: https://gitlab.com/piotr.wygocki/image_transfer_learning_tools/
Author: MIM Solutions
Author-email: maciej.satkiewicz@mim-solutions.pl
License: MIT
Project-URL: Bug Reports, https://gitlab.com/piotr.wygocki/image_transfer_learning_tools/-/issues
Keywords: pytorch,neural-networks,deep-learning,robust-learning,transfer-learning
Platform: UNKNOWN
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Topic :: Software Development :: Build Tools
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Requires-Python: >=3.6, <4
Description-Content-Type: text/markdown
License-File: LICENSE

# Installation

Robbytorch requires PyTorch, but PyTorch is deliberately not listed in the package dependencies: we recommend installing PyTorch manually via conda first and only then installing Robbytorch via pip.

Use your conda env or create a new one:

```bash
conda create --name <ENV NAME> python=3.8 pip
conda activate <ENV NAME>
```

Install [PyTorch](https://pytorch.org/). If you have older GPU drivers you may need an older version of CUDA, e.g.:

```bash
conda install pytorch torchvision torchaudio cudatoolkit=10.1 -c pytorch -c conda-forge
```

or even an [older PyTorch version](https://pytorch.org/get-started/previous-versions/):

```bash
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 -c pytorch
```
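After installing PyTorch, you can quickly verify that the build works and that CUDA is visible to it (the exact version strings will depend on what you installed):

```python
import torch

# Report the installed PyTorch version and whether a CUDA device is visible.
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```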

Then run:

```bash
pip install robbytorch
```

# Usage

See the Jupyter notebooks in `ipython/` for complete examples. For a step-by-step introduction, continue reading this file.

## Prepare Dataset

Place your data in a root directory of your choice, e.g. `"/dysk1/approx/robby"`.

You can subclass `robbytorch.datasets.DictDataset` and implement two methods; for more details, read that class's docstring. Here's an example implementation:

```python
import pandas as pd
import torch
from torchvision import transforms
from torch.utils.data import DataLoader

from robbytorch.datasets import DictDataset


class CreationsDataset(DictDataset):

    def load_data(self, idx):
        file_name = f"{self.metadata.iloc[idx]['creation_id']}.png"
        return self.load_image(file_name)
    
    def load_target_dict(self, idx):
        record = self.metadata.iloc[idx].to_dict()
        
        return {col: torch.tensor(record[col]).float() 
                for col in ['label', 'CR']
               }

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224))
])
data_root = "/dysk1/approx/robby"
metadata = pd.DataFrame(data=[(3141, 0, 0.01), (2137, 1, 0.012)], columns=["creation_id", "label", "CR"])
dataset = CreationsDataset(data_root, metadata, transform=transform)
dataloader = DataLoader(dataset, batch_size=128, shuffle=False, num_workers=2)
```

Now whenever you iterate over the `dataloader` you get a dict of batched tensors (with `.shape[0]` equal to the number of examples in the batch, at most `batch_size`):

```python
{
    "data": batched_tensor_data,
    "label": batched_tensor_label,
    "CR": batched_tensor_CR
}
```

You can use this structure however you like during training/evaluation.
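For instance, a single training step could consume the batch dict like the sketch below. The model, loss, and `training_step` helper here are hypothetical placeholders for illustration, not part of the Robbytorch API:

```python
import torch
import torch.nn.functional as F

def training_step(model, optimizer, batch):
    """Run one optimization step on a batch dict produced by the dataloader."""
    logits = model(batch["data"])              # forward pass on the image batch
    # "label" was stored as a float tensor, so a binary loss fits here
    loss = F.binary_cross_entropy_with_logits(logits.squeeze(-1), batch["label"])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Minimal usage with a toy linear model on 224x224 RGB images:
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 224 * 224, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
batch = {
    "data": torch.randn(4, 3, 224, 224),
    "label": torch.tensor([0.0, 1.0, 0.0, 1.0]),
    "CR": torch.tensor([0.01, 0.012, 0.01, 0.012]),
}
loss_value = training_step(model, optimizer, batch)
```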

## TODO - further explanations:

- training: 3x forward
- configs from lib2
- adding auxiliary losses via magic hooks
- explain Writers, livelossplot and mlflow
- loading robust networks
- description of utilities: notebook, visualization

