StandardScaler

In the lecture “Preprocessing for numerical features” the StandardScaler transformer was introduced. It “shifts and scales each feature individually so that they all have a 0-mean and a unit standard deviation.”
Does this mean that it should be used only for distributions that are close to a Gaussian?
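
For concreteness, here is a minimal sketch of what that shift-and-scale means in practice (the toy feature matrix is made up for illustration):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy feature matrix: two columns on very different scales
X = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 300.0],
              [4.0, 400.0]])

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Each column now has (approximately) zero mean and unit standard deviation
print(X_scaled.mean(axis=0))  # ~[0. 0.]
print(X_scaled.std(axis=0))   # ~[1. 1.]
```

Note that nothing in this operation requires the data to be Gaussian; it only recenters and rescales each feature.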

As mentioned in the scikit-learn doc:

Standardization of datasets is a common requirement for many machine learning estimators implemented in scikit-learn; they might behave badly if the individual features do not more or less look like standard normally distributed data: Gaussian with zero mean and unit variance.

If your input data is very different from a Gaussian, there are some non-linear transformations that you can use, in particular QuantileTransformer or PowerTransformer; see this for more details. There is also a RobustScaler that is similar to StandardScaler but uses estimates of the center and scale that are robust to outliers (the median and the interquartile range); see this. A short sketch of all three follows below.
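
As a quick illustration (with made-up skewed data), all three transformers can be used as drop-in replacements for StandardScaler:

```python
import numpy as np
from sklearn.preprocessing import QuantileTransformer, PowerTransformer, RobustScaler

rng = np.random.RandomState(0)
# A skewed, heavy-tailed feature, far from Gaussian
X = rng.lognormal(mean=0.0, sigma=1.0, size=(1000, 1))

# Maps the feature through its quantiles onto a target distribution
X_quantile = QuantileTransformer(output_distribution="normal").fit_transform(X)

# Applies a power transform (Yeo-Johnson by default) to make the data more Gaussian
X_power = PowerTransformer().fit_transform(X)

# Like StandardScaler, but centers on the median and scales by the
# interquartile range, so outliers have much less influence
X_robust = RobustScaler().fit_transform(X)
```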

The “more or less look like standard normally distributed data” wording means that, in practice, you need to try it on your data and check whether standardization affects the statistical performance of your machine learning model or not, for example with cross-validation as sketched below.
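
A minimal sketch of such a check, cross-validating the same estimator with and without scaling (the built-in breast cancer dataset and logistic regression are stand-ins for your own data and model):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Same model, with and without standardization
model_raw = LogisticRegression(max_iter=10_000)
model_scaled = make_pipeline(StandardScaler(), LogisticRegression(max_iter=10_000))

print(cross_val_score(model_raw, X, y).mean())
print(cross_val_score(model_scaled, X, y).mean())
```

If the two scores are close, the scaling choice matters little for that model; if they differ, pick the preprocessing that cross-validates better.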