Commit Graph

12 Commits

SHA1 Message Date
d760c45baf feat: Add multi-GPU training and improve config/ignore
Add train_multigpu.py for distributed data parallel training.

Update train.py to save the training configuration to a JSON file.

Generalize .gitignore to exclude all *.pt checkpoint files.

Delete obsolete train_dpp.py file.
2025-10-17 14:09:34 +08:00
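
A minimal sketch of the two changes this commit describes, assuming a `torchrun`-style launch and a dataclass `TrainConfig`; every name below is illustrative rather than the actual `train_multigpu.py` code.

```python
import dataclasses
import json
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_ddp() -> int:
    """Join the process group and pin this process to one GPU.

    Assumes a launch such as `torchrun --nproc_per_node=N train_multigpu.py`,
    which sets LOCAL_RANK in the environment.
    """
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    return local_rank

def wrap_model(model: torch.nn.Module, local_rank: int) -> DDP:
    """Replicate the model on this rank's GPU; DDP syncs gradients on backward."""
    return DDP(model.cuda(local_rank), device_ids=[local_rank])

def save_config(cfg, path: str = "train_config.json") -> None:
    """Persist the run configuration to JSON, assuming cfg is a dataclass."""
    with open(path, "w") as f:
        json.dump(dataclasses.asdict(cfg), f, indent=2)
```
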
053f86f4da config: Add weight decay to training configuration
Adds a `weight_decay` parameter to `TrainConfig` and applies it to the AdamW optimizer.
2025-10-17 13:47:37 +08:00
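
The wiring is presumably a one-line change; a minimal sketch, assuming `TrainConfig` is a dataclass (the `learning_rate` field and both default values are illustrative):

```python
from dataclasses import dataclass

import torch

@dataclass
class TrainConfig:
    learning_rate: float = 3e-4  # illustrative
    weight_decay: float = 0.1    # the field this commit adds

cfg = TrainConfig()
model = torch.nn.Linear(8, 8)   # stand-in for the actual model
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=cfg.learning_rate,
    weight_decay=cfg.weight_decay,  # AdamW applies decoupled weight decay
)
```
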
fe0304a96a feat: Save model with params in name and log losses 2025-10-17 10:44:17 +08:00
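
The commit does not show the exact filename format, so the fields baked into the name below are hypothetical; the pattern is simply to encode the defining hyperparameters in the checkpoint name and print both losses each epoch.

```python
import torch

n_layer, n_head, n_embd = 12, 12, 120          # values from cb7575a229 below
model = torch.nn.Linear(n_embd, n_embd)        # stand-in for the real model
epoch, train_loss, val_loss = 1, 1.234, 1.456  # placeholder metrics

path = f"model_l{n_layer}_h{n_head}_e{n_embd}.pt"  # hypothetical scheme
torch.save(model.state_dict(), path)
print(f"epoch {epoch}: train_loss={train_loss:.4f} val_loss={val_loss:.4f}")
```
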
02d84a7eca refactor: Use AdamW optimizer and increase early stopping patience 2025-10-17 10:31:12 +08:00
cb7575a229 feat: Update model and training parameters
In `models.py`:
- Change temporal attention mask to be strictly causal (`<` instead of `<=`).
- Add self-attention for the first token in a sequence to prevent NaNs (see the sketch after this entry).

In `train.py`:
- Update hyperparameters:
  - `block_length`: 24 -> 48
  - `n_embd`: 256 -> 120
  - `n_layer`: 8 -> 12
  - `n_head`: 8 -> 12
2025-10-16 18:50:15 +08:00
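
A sketch of what the mask change amounts to (the function and shapes are assumptions, not the actual `models.py` code): with a strictly causal comparison, a query attends only to keys at strictly earlier times, and the first token is allowed to attend to itself so that its attention row is not entirely masked, since a softmax over an all-`-inf` row yields NaNs.

```python
import torch

def temporal_causal_mask(t: torch.Tensor) -> torch.Tensor:
    """Boolean mask with True = attendable, for event times t of shape (B, T).

    Strictly causal: key j is visible to query i only if t[j] < t[i]
    (`<` rather than `<=`, per this commit).
    """
    mask = t.unsqueeze(-1) > t.unsqueeze(-2)  # (B, T, T)
    # The first token's row would otherwise be all False, and a softmax
    # over an all -inf row produces NaNs; let it attend to itself.
    mask[:, 0, 0] = True
    return mask
```
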
eec406d79f Update ignored events. 2025-10-16 17:10:01 +08:00
c7296381b8 Revert "feat: adapt train.py to multi-GPU environment"
This reverts commit b7aad7a774.
2025-10-16 16:23:38 +08:00
2b20299e36 Revert "fix: average loss for multi-GPU training"
This reverts commit 85502561ee.
2025-10-16 16:23:35 +08:00
85502561ee fix: average loss for multi-GPU training 2025-10-16 16:21:51 +08:00
b7aad7a774 feat: adapt train.py to multi-GPU environment 2025-10-16 16:16:15 +08:00
4181ead03a refactor: Improve attention mechanism and early stopping
- Refactor the self-attention mechanism in `models.py` to use `nn.MultiheadAttention` for better performance and clarity.
- Disable the early stopping check during warmup epochs in `train.py` to improve training stability (both changes are sketched after this entry).
2025-10-16 15:57:27 +08:00
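
A sketch of the refactor's shape, with illustrative names rather than the actual `models.py` code; note that a boolean `attn_mask` in `nn.MultiheadAttention` uses True to mean *blocked*, so an "attendable" mask must be inverted first. The warmup guard is the same idea in one line.

```python
import torch
import torch.nn as nn

class SelfAttentionBlock(nn.Module):
    """Illustrative stand-in for the refactored attention block."""

    def __init__(self, n_embd: int, n_head: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(n_embd, n_head, batch_first=True)

    def forward(self, x: torch.Tensor, allowed: torch.Tensor) -> torch.Tensor:
        # x: (B, T, n_embd); allowed: (T, T) with True = attendable.
        out, _ = self.attn(x, x, x, attn_mask=~allowed, need_weights=False)
        return out

def should_check_early_stopping(epoch: int, warmup_epochs: int) -> bool:
    """Skip the early-stopping check until warmup has finished."""
    return epoch >= warmup_epochs
```
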
589d4d0bd2 feat: Implement time-aware GPT-2 for patient event prediction
This commit introduces a complete framework for training a temporal GPT-2 model on sequential patient event data.

Key components include:

- `models.py`:
  - `TimeAwareGPT2`: A custom GPT-2 model that incorporates temporal information through a time-based causal attention mask and a sinusoidal age encoding for positional information.
  - `AgeSinusoidalEncoding`: A module for creating time-based positional embeddings (a sketch follows this entry).
  - `CombinedLoss`: A two-part loss function combining cross-entropy for event prediction and a survival loss for event timing.

- `utils.py`:
  - `PatientEventDataset`: A PyTorch Dataset class to process, batch, and load patient event sequences, including imputation of "no event" gaps and padding/truncation.

- `train.py`:
  - A comprehensive training script that initializes the model, data loaders, and loss function.
  - Implements a training loop with a cosine annealing learning rate scheduler, validation, and early stopping based on validation loss.

- `prepare_data.py`:
  - Script for preprocessing raw UK Biobank data into a format suitable for the model.

- `GEMINI.md`:
  - Project documentation outlining the structure, coding style, and framework.
2025-10-16 14:21:36 +08:00
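
Of these components, `AgeSinusoidalEncoding` is the least standard; a minimal sketch, assuming it follows the usual transformer frequency schedule but evaluated at continuous ages rather than integer positions (the actual module may differ, and `n_embd` is assumed even):

```python
import math

import torch
import torch.nn as nn

class AgeSinusoidalEncoding(nn.Module):
    """Sinusoidal embedding of continuous age values (illustrative sketch)."""

    def __init__(self, n_embd: int, max_period: float = 10000.0):
        super().__init__()
        half = n_embd // 2
        freqs = torch.exp(-math.log(max_period) * torch.arange(half) / half)
        self.register_buffer("freqs", freqs)

    def forward(self, age: torch.Tensor) -> torch.Tensor:
        # age: (B, T) continuous values, e.g. age in days at each event.
        angles = age.unsqueeze(-1) * self.freqs  # (B, T, n_embd // 2)
        return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
```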