LIGO Document T2400258-v2
The current locking scheme at the LIGO 40-Meter Interferometer uses closed-loop feedback control to suppress noise and acquire lock. However, this linear control method is inefficient
and time-consuming for a nonlinear system. In this paper, we present reinforcement learning
as an alternative approach for lock acquisition, known as intelligent control. We first develop
a neural network simulation of FINESSE 3, an interferometer modeling package, achieving
a significant increase in simulation speed at the expense of some accuracy. This simulation is
evolved in time with the noise forces present in the 40-meter laboratory. We then train a Proximal
Policy Optimization (PPO) agent to acquire lock in the simulated environment. Because training directly against FINESSE 3 is too slow, we replace it with the neural network simulation, which reproduces the FINESSE data at a much faster pace. Our results
demonstrate that the training speed of the RL agent increased by approximately a factor of 30 with the neural network. This work is particularly
relevant as future upgrades in laser power and interferometer complexity are expected to increase
the frequency of lock loss. Intelligent control can reduce detector downtime, and our research
lays the groundwork for a prototype implementation at the 40-meter interferometer.
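
As an illustration of the surrogate step, the following sketch (not the code used in this work) fits a small PyTorch MLP to stand-in data in place of FINESSE 3 outputs; the input dimension, network size, and placeholder dataset are assumptions made for the example.

```python
# Illustrative sketch only: train a small MLP to reproduce simulator outputs.
# In practice X would be mirror positions / cavity detunings sampled with
# FINESSE 3, and Y the corresponding photodiode error signals.
import torch
import torch.nn as nn

X = torch.randn(10_000, 4)                      # placeholder inputs
Y = torch.sin(X).sum(dim=1, keepdim=True)       # stand-in for FINESSE 3 output

surrogate = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(surrogate(X), Y)             # regression onto simulator data
    loss.backward()
    optimizer.step()
```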
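
A minimal sketch of the PPO training stage, assuming Stable-Baselines3 and Gymnasium; the environment, its state and action dimensions, the reward, and the SurrogatePlant placeholder are illustrative and do not reflect the actual 40-meter lock-acquisition setup.

```python
# Illustrative sketch only: PPO trained against a cheap surrogate environment
# instead of calling FINESSE 3 at every step.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class SurrogatePlant:
    """Stand-in for the learned FINESSE 3 surrogate (a real version would be a trained network)."""
    def step(self, state, action):
        # Placeholder dynamics: relaxation plus actuation and sensing noise.
        return 0.95 * state + 0.05 * action + 1e-3 * np.random.randn(*state.shape)


class LockAcquisitionEnv(gym.Env):
    """Toy lock-acquisition environment: drive the cavity error signal toward zero."""
    def __init__(self):
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(4,), dtype=np.float32)
        self.plant = SurrogatePlant()

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.state = self.np_random.uniform(-1.0, 1.0, size=4).astype(np.float32)
        self.t = 0
        return self.state, {}

    def step(self, action):
        self.state = self.plant.step(self.state, action).astype(np.float32)
        self.t += 1
        error = float(np.linalg.norm(self.state))
        reward = -error                   # reward shrinking the error signal
        terminated = error < 1e-2         # "locked" once the error is small
        truncated = self.t >= 500
        return self.state, reward, terminated, truncated, {}


if __name__ == "__main__":
    env = LockAcquisitionEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=50_000)   # fast because the surrogate replaces FINESSE 3
```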