#### Neural Networks (PyTorch)

## Adversarial Attacks on CIFAR-10

Adversarial attacks on neural networks, introduced by Goodfellow et al., are techniques for fooling models through small perturbations applied to the input data. As a result, the accuracy of a trained model can drop to almost zero, while the perturbed inputs, e.g. images, remain imperceptibly different to a human observer — a serious concern for AI systems in self-driving cars, the military, or medicine. In this post, I take a model trained on CIFAR-10 and demonstrate how the applied perturbations shift predictions to incorrect classes, which is easily visualised with confusion matrices.
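The perturbation scheme described above can be sketched with the Fast Gradient Sign Method (FGSM) from the Goodfellow et al. paper. This is a minimal, self-contained illustration: the tiny linear classifier and the random batch below are hypothetical stand-ins for the trained CIFAR-10 model and real images, and `eps = 0.03` is an illustrative perturbation budget.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for the trained CIFAR-10 classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, eps):
    """FGSM: step each pixel by eps along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in valid range

x = torch.rand(4, 3, 32, 32)       # dummy batch of CIFAR-10-sized images
y = torch.randint(0, 10, (4,))     # dummy labels
x_adv = fgsm(x, y, eps=0.03)       # perturbation bounded by eps per pixel
```

Because every pixel moves by at most `eps`, the adversarial image stays visually close to the original while the loss increases in the locally steepest direction.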

#### Option Pricing (Python)

## Explicit Method of Lines for European Option

Method of Lines (MOL) is a semi-analytical approach to solving PDEs in which the space dimension is discretised while time remains continuous. The result is a system of ODEs coupled through a matrix operator, which is then integrated in time. MOL is well suited to option pricing because discretising only in space reduces the estimation error: the method is accurate, relatively fast, and generally reliable. Nevertheless, the stability of the explicit scheme still depends on the space-grid discretisation.
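The scheme can be sketched for a European call under Black-Scholes: discretise the asset price `S` with central differences, then march the resulting ODE system with explicit (forward) Euler in calendar time reversed from maturity. All parameters and grid sizes below are illustrative choices, not the post's actual setup; note the small time step forced by the explicit stability constraint.

```python
import numpy as np

# Illustrative parameters: rate, volatility, strike, maturity.
r, sigma, K, T = 0.05, 0.2, 100.0, 1.0
S_max, M, N = 300.0, 150, 20000        # space nodes, time steps (dt kept small for stability)
S = np.linspace(0.0, S_max, M + 1)
dS, dt = S[1] - S[0], T / N

V = np.maximum(S - K, 0.0)             # call payoff at maturity
for n in range(1, N + 1):              # march in time-to-maturity tau = n * dt
    V_SS = (V[2:] - 2.0 * V[1:-1] + V[:-2]) / dS**2        # central 2nd derivative
    V_S = (V[2:] - V[:-2]) / (2.0 * dS)                    # central 1st derivative
    rhs = 0.5 * sigma**2 * S[1:-1]**2 * V_SS + r * S[1:-1] * V_S - r * V[1:-1]
    V[1:-1] += dt * rhs                # explicit Euler step of the ODE system
    V[0] = 0.0                         # call is worthless at S = 0
    V[-1] = S_max - K * np.exp(-r * n * dt)   # asymptotic boundary at S_max

price = np.interp(100.0, S, V)         # should land close to the Black-Scholes value ~10.45
```

The interior update is exactly the semi-discrete ODE system `dV/dtau = L V` with `L` the tridiagonal Black-Scholes operator; the explicit step is only stable when `dt` is small relative to `dS**2 / (sigma**2 * S_max**2)`, which is the space-grid dependence mentioned above.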

#### Option Pricing (Python)

## Implicit Method of Lines for European Option

In this post, I present MOL for the European option again, this time using the implicit method, which is unconditionally stable and allows for a more accurate price approximation.
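The implicit variant can be sketched as a backward Euler step on the same semi-discrete system: each step solves `(I - dt*L) V_new = V_old` instead of multiplying by the operator, so no time-step restriction applies. Again, the parameters and grid below are illustrative, and a dense solve stands in for the banded solver one would use in practice.

```python
import numpy as np

# Illustrative parameters: rate, volatility, strike, maturity.
r, sigma, K, T = 0.05, 0.2, 100.0, 1.0
S_max, M, N = 300.0, 150, 100          # far fewer time steps than the explicit scheme needs
S = np.linspace(0.0, S_max, M + 1)
dS, dt = S[1] - S[0], T / N
Si = S[1:-1]                           # interior nodes

# Tridiagonal Black-Scholes operator L on the interior nodes.
lower = 0.5 * sigma**2 * Si**2 / dS**2 - r * Si / (2.0 * dS)
diag = -sigma**2 * Si**2 / dS**2 - r
upper = 0.5 * sigma**2 * Si**2 / dS**2 + r * Si / (2.0 * dS)
L = np.diag(diag) + np.diag(lower[1:], -1) + np.diag(upper[:-1], 1)
A = np.eye(M - 1) - dt * L             # backward Euler system matrix

V = np.maximum(S - K, 0.0)             # call payoff at maturity
for n in range(1, N + 1):
    b = V[1:-1].copy()
    V[-1] = S_max - K * np.exp(-r * n * dt)   # known boundary at the new time level
    b[-1] += dt * upper[-1] * V[-1]           # move boundary coupling to the RHS
    V[1:-1] = np.linalg.solve(A, b)           # one implicit step (dense solve for brevity)

price = np.interp(100.0, S, V)
```

Unconditional stability means `N` can be chosen purely for accuracy rather than to satisfy a `dt`-versus-`dS` constraint, which is what makes the implicit scheme attractive here.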