EEG libraries in Python, a landscape analysis
What are the most well-known and tested Python libraries for reading EEG signals, and how do they compare in terms of adoption, ecosystem, features, and maintenance? Let’s find out.
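As a quick taste of what "reading EEG signals" looks like in practice, here is a minimal sketch using MNE, one of the better-known candidates; the file path is hypothetical and the post's comparison is not limited to this one library:

```python
import mne  # used here purely as an example of the kind of library compared

# Hypothetical path; any EDF recording you have locally will do.
raw = mne.io.read_raw_edf("recording.edf", preload=True)

print(raw.info)            # sampling rate, channel names, filters, ...
data = raw.get_data()      # NumPy array of shape (n_channels, n_samples)
print(data.shape, raw.ch_names[:5])
```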
Effective Attention Scoring for Padded Inputs
Commonly, attention layers learn a weight for every token of a sequential input, so that $n_{\text{attention weights}} = n_{\text{input tokens}}$. Although it is reasonable to let the model learn how much each token in a sequence should contribute to the prediction, padding breaks this assumption. Let’s find out why.
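As a preview of the usual remedy, here is a minimal NumPy sketch of masking: padded positions are pushed to $-\infty$ before the softmax so they receive exactly zero attention weight (an illustration of the idea, not the post's code):

```python
import numpy as np

def masked_softmax(scores, mask):
    """Softmax over attention scores, ignoring padded positions.

    scores: (seq_len,) raw attention scores
    mask:   (seq_len,) boolean, True for real tokens, False for padding
    """
    # Padded positions get -inf, so exp(-inf) = 0 and they receive no weight
    masked_scores = np.where(mask, scores, -np.inf)
    exp = np.exp(masked_scores - masked_scores.max())
    return exp / exp.sum()

scores = np.array([2.0, 1.0, 0.5, 0.0])   # last token is padding
mask = np.array([True, True, True, False])
print(masked_softmax(scores, mask))        # padded token gets weight 0.0
```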
Accelerated Linear Algebra Libraries (MKL and OpenBLAS)
Accelerated linear algebra libraries such as MKL and OpenBLAS implement the Basic Linear Algebra Subprograms (BLAS), a set of low-level routines for common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. These implementations are often heavily optimized for speed, for example by taking advantage of special floating-point hardware such as vector registers or SIMD instructions. Using them can bring substantial performance benefits.
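As a small illustration (exact output and timings depend on your machine and on which BLAS NumPy was built against), you can check the backend and see it at work on a large matrix product:

```python
import time
import numpy as np

# Reports which BLAS implementation (MKL, OpenBLAS, ...) NumPy was built against
np.show_config()

# A large matrix multiplication is dispatched to the underlying BLAS routine,
# so its speed reflects the accelerated library rather than Python itself.
a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)

start = time.perf_counter()
c = a @ b
print(f"2000x2000 matmul took {time.perf_counter() - start:.3f} s")
```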
Deep Learning in production, eager mode and graph execution
In computing, just-in-time (JIT) compilation (also dynamic translation or run-time compilation) is a way of executing computer code that involves compilation during execution of a program (at run time) rather than before execution.
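A minimal sketch of the eager-versus-graph distinction, here using PyTorch's `torch.jit.trace` purely as an illustration (the post's framework and code may differ):

```python
import torch

def f(x):
    # Eager execution: each operation runs immediately, line by line
    return torch.relu(x) * 2.0 + 1.0

# Tracing records the operations into a static graph that can be optimized
# and executed without the Python interpreter in the loop for every op
traced_f = torch.jit.trace(f, torch.randn(3))

x = torch.randn(3)
print(f(x))          # eager
print(traced_f(x))   # graph (same result, compiled execution)
```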
Enable MathJax in your Jekyll page
Getting mathematical notation to render in Jekyll is tricky business. If you’re having trouble, try following my recipe below.
Array Broadcasting, a visual explanation
The term broadcasting describes how NumPy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is “broadcast” across the larger array so that they have compatible shapes.
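A concrete example of the rule, not taken from the post itself:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)   # shape (3, 4)
b = np.array([10, 20, 30, 40])    # shape (4,)

# b is broadcast across each row of a: the trailing dimensions match (4 == 4)
# and the missing leading dimension of b is treated as 1 and stretched to 3.
print(a + b)

# Incompatible shapes raise an error instead of broadcasting:
c = np.array([1, 2, 3])           # shape (3,)
try:
    a + c
except ValueError as e:
    print("broadcast failed:", e)
```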
Neural Networks as Universal Function Approximators
The universal approximation theorem states that a feed-forward neural network with a single hidden layer containing a finite number of neurons can approximate any continuous function to arbitrary precision, provided some assumptions on the activation function are met [1]. Continuity is essential: if the function jumps around or has gaps, the theorem gives no such guarantee.
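A small sketch of the theorem in action, approximating $\sin(x)$ with a single hidden ReLU layer; the random hidden features and least-squares output fit are just a convenient way to exhibit the approximation, not how such networks are usually trained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a continuous function on [0, 2*pi]
x = np.linspace(0, 2 * np.pi, 500)[:, None]
y = np.sin(x).ravel()

# One hidden layer with random weights/biases and a ReLU activation;
# only the output weights are fit (enough to illustrate approximation).
n_hidden = 50
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = np.maximum(0.0, x @ W + b)          # hidden activations, shape (500, n_hidden)

# Least-squares fit of the output layer
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ w_out

print("max abs error:", np.max(np.abs(y - y_hat)))
```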
Forward pass, an interactive breakdown
One of the most striking facts about neural networks is that they are guaranteed to be able to approximate any continuous function $f(x)$ arbitrarily well.
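Before the interactive breakdown, here is the forward pass of a one-hidden-layer network sketched in NumPy; the layer sizes and the $\tanh$ activation are illustrative choices, not the post's:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer network, step by step."""
    z1 = x @ W1 + b1          # linear transform into the hidden layer
    a1 = np.tanh(z1)          # non-linear activation
    z2 = a1 @ W2 + b2         # linear transform into the output layer
    return z2

rng = np.random.default_rng(42)
x = rng.normal(size=(1, 3))                      # one input sample, 3 features
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)    # 3 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # 4 hidden units -> 1 output

print(forward(x, W1, b1, W2, b2))
```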