# Basics

## Setting up the problem

NeuralQuantum's aim is to compute the steady state of an Open Quantum System. As such, the first step is defining the elements that make up the Lindbladian, namely the Hilbert space, the Hamiltonian and the jump (loss) operators.
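
For reference, the Lindbladian $\mathcal{L}$ is the superoperator generating the master equation for the density matrix $\rho$ (with $\hbar = 1$),

$$
\dot\rho = \mathcal{L}\rho = -i\left[\hat{H}, \rho\right] + \sum_k \left( \hat{L}_k \rho \hat{L}_k^\dagger - \frac{1}{2}\left\{\hat{L}_k^\dagger \hat{L}_k, \rho\right\} \right),
$$

where $\hat{H}$ is the Hamiltonian and the $\hat{L}_k$ are the jump (loss) operators.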

While it is possible to specify an arbitrary quantum system, the easiest way is to use one of the already-implemented systems.

using NeuralQuantum

Nspins = 5 # The number of spins in the system

# Compute the Hamiltonian and the list of jump operators.
sys = quantum_ising_system(Nspins, V=0.2, g=1.0, gamma=0.1, PBC=true)

Next, we need to define the quantity that we wish to minimize variationally to find the steady state. This will be $\langle\rho|\mathcal{L}^\dagger\mathcal{L}|\rho\rangle$. We call this quantity the problem.

prob = LdagLProblem(sys, compute_ss=false)
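
Explicitly, the quantity that is minimized (and printed during the optimization, see below) is the normalized expectation value

$$
C(\rho) = \frac{\langle\rho|\mathcal{L}^\dagger\mathcal{L}|\rho\rangle}{\langle\rho|\rho\rangle} \geq 0,
$$

which vanishes if and only if $|\rho\rangle$ is the steady state of $\mathcal{L}$.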

## Choosing the Ansatz

The next step is to create the network-based ansatz for the density matrix. In this example we will use a translational-invariant Neural Density Matrix with 64-bit floating-point precision, with $N_\text{spins}$ spins in the visible layer, 3 features in the hidden layer (roughly $3N_\text{spins}$ hidden spins) and 3 features in the ancilla. To increase the expressive power of the network, one may freely increase the number of features. For a complete list of all possible ansatzes refer to the section Networks.

net_initial = TINDM{Float64}(Nspins, 3, 3)

When you create a network, all its weights are initialized to zero. It is usually a good idea to initialize them according to a Gaussian distribution of width $\sim 0.1$. Often, in the machine learning community, they are instead initialized according to a uniform distribution on $[-\frac{1}{\sqrt{N}}, \frac{1}{\sqrt{N}}]$ [ref?].

# Load the package with random numbers
using Random

# For reproducible results, choose a seed
rng = MersenneTwister(12345)

# Initialize the weights according to a gaussian distribution of width 1
randn!(rng, net_initial.weights)

# Set the width to 0.3
net_initial.weights .*= 0.3
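
Should you prefer the uniform initialization mentioned above, the following sketch shows one way to do it, assuming (as in the snippet above) that the weights are stored in the plain array net_initial.weights:

# Hypothetical alternative: uniform initialization in [-1/sqrt(N), 1/sqrt(N)]
N = length(net_initial.weights)
rand!(rng, net_initial.weights)  # uniform samples in [0, 1)
net_initial.weights .= (2 .* net_initial.weights .- 1) ./ sqrt(N)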

## Solving for the steady state

Having specified the quantity to minimize and the ansatz, we only need to choose the sampler and the optimization scheme.

The sampler is the algorithm that selects the states in the Hilbert space to be summed over. Options are Exact, which performs an exact sum over all possible vectors in the Hilbert space, ExactDistrib, which samples a certain number of elements according to the exact distribution, and Metropolis, which builds a Markov chain according to the Metropolis rule. For this example we will choose an exact sum over the whole Hilbert space.

The optimizer, instead, is the algorithm that, given the gradient, updates the variational weights of the ansatz. In this example we will use a simple gradient-descent scheme with step size lr=0.01. Many more optimizers are described in Sec. Optimizers.
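
Concretely, plain gradient descent updates the weights $W$ of the ansatz at every iteration $k$ according to

$$
W_{k+1} = W_k - \mathrm{lr}\,\nabla_W C(W_k),
$$

where lr is the step size (learning rate).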

# Initialize an Exact Sum Sampler
sampler = Exact()

# Initialize a GD optimizer with learning rate (step size) 0.01
optim = GD(lr=0.01)

# Optimize the weights of the neural network for 100 iterations
sol = solve(net_initial, prob, sampling_alg=sampler, optimizer=optim, max_iter=100)

After running the solve command you should see in the terminal the value of the loss function printed at every iteration, together with its variation since the previous iteration, such as this:

  iter -> E= <rho| LdagL |rho>/<rho|rho>     → ΔE = variation since last iter
     3 -> E=(+7.199784e-01 +im+8.558472e-18) → ΔE = -1.20e-01
     4 -> E=(+6.173014e-01 +im-5.918733e-18) → ΔE = -1.03e-01
     5 -> E=(+4.952037e-01 +im-5.480910e-18) → ΔE = -1.22e-01

If the optimization goes well, ΔE should almost always be negative, so that the loss converges towards its minimum.

## The solution object

The solve command returns a structure holding several important pieces of data.

To access, for example, the value of $\langle\rho|\mathcal{L}^\dagger\mathcal{L}|\rho\rangle$ along the optimization, one can simply do

sol[:Energy].iterations
sol[:Energy].values

where iterations is a vector specifying at which iteration each entry of values was logged.
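
For example, based on these two fields, the last logged value of the cost function can be inspected as follows (the logged values may be complex, as in the terminal output above, so we take the real part):

last_iter  = sol[:Energy].iterations[end]
last_value = real(sol[:Energy].values[end])
println("iteration ", last_iter, ": E = ", last_value)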

## Logging observables during the evolution

It can be useful to also store other observables during the evolution. To do so, one can pass to the solve command, via the observables keyword argument, a list of (name, operator) tuples of the form:

[(:obs1, obs1), (:obs2, obs2)]

Note: computing the fidelity is a very computationally demanding task, and as such you should not usually use this feature.

For example, to store the observables $m_x$, $m_y$ and $m_z$ every 2 optimization steps, we can do the following:

# Compute the operator for the average magnetization
mx, my, mz = magnetization([:x, :y, :z], sys)./Nspins

# Create the list of observables and symbols
obs = [(:mx, mx), (:my, my), (:mz, mz)];

# Solve, logging the observables every 2 iterations
sol = solve(net_initial, prob, max_iter=100, observables=obs, save_at=2)

The magnetization(::Vector{Symbol}, ::System) function returns the total magnetization operator along each of the specified axes for the given system. For more information on this function refer to Systems.

## Using the Logger

Once you have a solution, you can plot it with:

using Plots

pl_E = plot(sol[:Energy])
pl_x = plot(sol[:mx])
pl_y = plot(sol[:my])
pl_z = plot(sol[:mz])
# Combine the four plots into a single figure
plot(pl_E, pl_x, pl_y, pl_z)
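
If you want to keep the figure, the current plot can be written to disk with Plots.jl's savefig (the file name is just an example):

# Save the last plot to a PNG file
savefig("steady_state_optimization.png")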

Quantities inside the logger are stored as follows:

sol[:OBSERVABLE].iterations -> the iterations at which the observable was logged
sol[:OBSERVABLE].values     -> the corresponding logged values

You can also concatenate several different solutions. This will return an MVHistory object holding the evolution of all quantities of interest and observables, but it will no longer hold any information on the weights.

## Summary

using NeuralQuantum, Random

Nspins = 5 # The number of spins in the system

# Compute the Hamiltonian and the list of jump operators.
sys = quantum_ising_system(Nspins, V=0.2, g=1.0, gamma=0.1, PBC=true)

# Compute the operator for the average magnetization
mx, my, mz = magnetization([:x, :y, :z], sys)./Nspins

# Create the list of observables and symbols
obs = [(:mx, mx), (:my, my), (:mz, mz)];

# Define the quantity to be minimised (the loss function)
prob = LdagLProblem(sys, compute_ss=false)

# Translational-Invariant Neural Density Matrix with 3 hidden and 3 ancillary features
net_initial = TINDM{Float64}(Nspins, 3, 3)

# For reproducible results, choose a seed
rng = MersenneTwister(12345)

# Initialize the weights according to a gaussian distribution of width 1
randn!(rng, net_initial.weights)

# Set the width to 0.3
net_initial.weights .*= 0.3

# Initialize an Exact Sum Sampler
sampler = Exact()

# Initialize a GD optimizer with learning rate (step size) 0.01
optim = GD(lr=0.01)

# Optimize the weights of the neural network for 100 iterations
sol = solve(net_initial, prob, sampling_alg=sampler, optimizer=optim,
            max_iter=100, observables=obs, save_at=1)