Tag: Python
A Fully connected neural network in JAX
posted on 2020-12-25T14:48:19Z
JAX is the hot new ML tool on the block, and I’m currently trying to get acquainted with it. The first thing I usually do to get to know a tool like this is to build a simple neural network to solve the MNIST digit recognition task. So here we go…
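The forward pass of such a fully connected network is only a few lines. Here is a minimal sketch in plain NumPy (`jax.numpy` mirrors the NumPy API, so the JAX version looks nearly identical; the layer sizes and function names are illustrative, not taken from the post):

```python
import numpy as np

def init_params(rng, sizes):
    """One (weights, biases) pair per adjacent pair of layer sizes."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """ReLU hidden layers, linear output layer."""
    for w, b in params[:-1]:
        x = np.maximum(x @ w + b, 0.0)
    w, b = params[-1]
    return x @ w + b

rng = np.random.default_rng(0)
params = init_params(rng, [784, 128, 10])  # MNIST: 784 pixels in, 10 digits out
logits = forward(params, rng.standard_normal((32, 784)))  # batch of 32 images
print(logits.shape)  # (32, 10)
```

In JAX you would then get the training loop almost for free: `jax.grad` differentiates the loss through `forward`, and `jax.jit` compiles the update step.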
Creating a Pytorch solver for sparse linear systems
posted on 2020-10-17T11:04:18Z
Not so long ago, I implemented a wrapper library in PyTorch to solve sparse linear systems on the CPU using the SuiteSparse routines under the hood. My goal is to eventually integrate this sparse solver into my photonic circuit simulator, Photontorch. However, for now, I thought it would be instructive to go over the steps I took to implement both the forward pass and the backward pass of such a custom PyTorch function.
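To give a flavour of what those two passes involve: for x = A⁻¹b, the backward pass only needs one extra solve, this time with Aᵀ. A minimal dense NumPy sketch (a stand-in for the sparse SuiteSparse-backed routine; the real implementation wraps this logic in a `torch.autograd.Function`, and all names here are illustrative):

```python
import numpy as np

def solve_forward(A, b):
    """Forward pass: solve A x = b (dense stand-in for the sparse routine)."""
    return np.linalg.solve(A, b)

def solve_backward(A, x, grad_x):
    """Backward pass for x = A^{-1} b:
    grad_b = A^{-T} grad_x   and   grad_A = -grad_b x^T."""
    grad_b = np.linalg.solve(A.T, grad_x)
    grad_A = -np.outer(grad_b, x)
    return grad_A, grad_b

# Check grad_b against a finite-difference approximation.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)  # keep A well-conditioned
b = rng.standard_normal(4)
x = solve_forward(A, b)

grad_x = rng.standard_normal(4)  # incoming gradient from the loss
grad_A, grad_b = solve_backward(A, x, grad_x)

eps = 1e-6
fd = np.array([(grad_x @ (solve_forward(A, b + eps * e) - x)) / eps
               for e in np.eye(4)])
print(np.allclose(grad_b, fd, atol=1e-4))  # True
```

Since x is linear in b, the finite-difference check is exact up to floating-point error, which makes it a convenient sanity test for the backward pass.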
Towards a loss function for YOLO
posted on 2020-09-01T09:57:11Z
Writing the second part of the YOLO series took a lot longer than I care to admit… but it’s finally here! In this part we’ll go over the definition of a loss function to train the YOLO architecture.
Building Tiny YOLO from scratch using PyTorch
posted on 2020-04-23T12:09:34Z · last modified on 2020-09-01T09:57:11Z
In this series we’ll go over YOLO (You Only Look Once), a state-of-the-art object detection deep neural network. In this blog post, we’ll build the simplest YOLO network: Tiny YOLO v2. This stripped-down version of YOLO will yield the easiest introduction to the neural network structure of YOLO, while still providing close to state-of-the-art performance.
Reinforcement learning from the ground up | part 2: deep Q learning on tic-tac-toe.
posted on 2019-12-22T14:53:34Z · last modified on 2020-06-01T11:36:56Z
Previously, we saw that reinforcement learning worked quite well on tic-tac-toe. However, there’s something unsatisfying about working with a Q-table storing all the possible states of the game. It feels like the Agent simply memorizes each state of the game and acts according to some memorized rules obtained from its huge amount of experience (remember that the Agent played 10,000,000 games during training). In this second part of the reinforcement learning series, we’ll swap out the Q-table for a neural network.
Reinforcement learning from the ground up | part 1: tic-tac-toe.
posted on 2019-11-29T12:12:07Z · last modified on 2020-06-01T11:36:56Z
As a first example of reinforcement learning, we’ll make our computer learn by itself how to play tic-tac-toe. As one of the simplest two-player games, tic-tac-toe is ideal for getting started with reinforcement learning, while still being more interesting than learning to play a single-player game.