# EPICS Network Protocols in Containers
When EPICS IOCs run in containers, the Channel Access (CA) and PV Access (PVA) protocols must be made available to clients. This page discusses the challenges this raises and the approaches to dealing with them.
## Approaches to Network Protocols
There are three approaches to getting clients and servers connected (a minimal sketch of each follows the list):

1. **Run IOC containers in the host network.**
   - This is the approach that DLS has adopted for IOCs running in Kubernetes.
   - The container uses the host's network stack, so as far as clients are concerned the IOC looks identical to one running directly on the host machine.
   - See a discussion of the reasoning here: Channel Access and Other Protocols.
   - This reduces the isolation of the container from the host, so additional security measures may be needed.

2. **Use port mapping.**
   - This is the approach used in the developer containers defined by ioc-template.
   - The container runs in a container network and the necessary ports are mapped from the host network into the container network.
   - VSCode can do this port mapping automatically when it detects processes binding to ports.
   - This approach is good for local development and running tutorials: the mapping can be made to localhost only, so PVs are isolated to the developer's machine.

3. **Run the clients in the same container network as the IOCs.**
   - This is the approach used in example-services, which runs a PVA gateway and a CA gateway in the same container network as the IOCs.
   - The gateways use port mapping to give access to their own clients, and can use any ports and UDP broadcast to communicate with the IOCs.
   - If your client is a GUI application such as phoebus, this may not work, because X11 forwarding into a rootless container network can be difficult.
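As a rough illustration, the three approaches might be launched with podman as follows. This is a minimal sketch: the container names and the `ioc-net` network are illustrative, and the image is the example IOC runtime used in the test scripts later on this page.

```bash
# sketch of the three approaches (run one at a time: approaches 1 and 2
# both claim port 5064 on the host)
IMAGE=ghcr.io/epics-containers/ioc-template-example-runtime:4.1.0

# 1. host network: the IOC looks like a native process to clients
podman run -dit --rm --name ioc-host --network host $IMAGE

# 2. port mapping: bind the CA ports to localhost only, isolating PVs to
#    the developer's machine
podman run -dit --rm --name ioc-mapped \
    -p 127.0.0.1:5064:5064 -p 127.0.0.1:5064:5064/udp $IMAGE

# 3. shared container network: clients (e.g. gateways) join the same network
podman network create ioc-net
podman run -dit --rm --name ioc-shared --network ioc-net $IMAGE
```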
## General Observations
Running the client and the server in the host network, or together in the same container network, is compatible with both the PVA and CA protocols. For podman and docker networks this is true even for UDP broadcast.

For the majority of Kubernetes CNIs, broadcast does not work across pods. Broadcast between containers in the same pod would quite possibly work, since containers in a pod share a network namespace and this is equivalent to the 'same container network' approach; however, packing multiple IOCs into each pod would make managing large numbers of IOCs far more of a manual task.
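For example, the observation about podman networks can be checked with a sketch like the following, where broadcast name resolution happens entirely inside one container network. `$CLIENT_IMAGE` is a hypothetical placeholder for any image that provides the EPICS command line tools:

```bash
# sketch: UDP broadcast name resolution within a shared podman network
podman network create ioc-net
podman run -dit --rm --name ioc --network ioc-net \
    ghcr.io/epics-containers/ioc-template-example-runtime:4.1.0
# a client in the same network finds the IOC by broadcast search, with no
# address list or port mapping needed (CLIENT_IMAGE is a placeholder)
podman run --rm --network ioc-net $CLIENT_IMAGE caget EXAMPLE:IBEK:SUM
```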
## Channel Access
Specification: https://docs.epics-controls.org/en/latest/internal/ca_protocol.html
Experiments with Channel Access servers running in containers reveal:

- Port mapping works for CA, including UDP broadcast.
- However, UDP broadcast and unicast only work if the port is not remapped to a different number inside the container (mapping host port 5064 to container port 5064 works; mapping host port 8064 to container port 5064 does not).
- Using EPICS_CA_NAME_SERVERS always works with port mapping, because the name server needs only a single TCP connection and is therefore NAT friendly.
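For example, the name server approach needs only a single mapped TCP port; this mirrors the last two tests in the script below:

```bash
# assumes the IOC container was started with: -p 127.0.0.1:5064:5064
export EPICS_CA_NAME_SERVERS="localhost:5064"
caget EXAMPLE:IBEK:SUM
```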
## PV Access
Specification: https://docs.epics-controls.org/en/latest/pv-access/Protocol-Messages.html
Experiments with PV Access servers running in containers reveal:

- Port mapping for PVA using UDP always fails, because PVA servers open a new random port for each circuit, which is not NAT friendly.
- Using EPICS_PVA_NAME_SERVERS always works with port mapping, but both the client and the server must be PVXS.
- To talk to a non-PVXS server, a pvagw running in the same container network may be used to proxy the traffic.
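For example, with a single mapped TCP port and a PVXS-based client such as p4p (as used in the test script below), the name server setting is sufficient:

```bash
# assumes the IOC container was started with: -p 5075:5075
export EPICS_PVA_NAME_SERVERS="localhost:5075"
python -c '
from p4p.client.thread import Context
print(Context("pva").get("EXAMPLE:IBEK:SUM"))
'
```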
## Code
The following bash scripts can be run to test the assertions made above. The first script exercises Channel Access:
```bash
#!/bin/bash
# demo of exposing Channel Access outside of a container
# caRepeater:
#
# note that these experiments ignore EPICS_CA_REPEATER_PORT. Typically
# IOCs in containers should also expose port 5065 for the caRepeater.
# Because only the first IOC needs to start caRepeater, and that one process
# binds to 5065, caRepeater continues to work as expected.
# (caRepeater can go down if the IOC that started it goes down, but it will be
# restarted by the next IOC startup.)
cmd='-dit --rm --name test ghcr.io/epics-containers/ioc-template-example-runtime:4.1.0'
check () {
    # start the IOC container and wait for it to finish initializing
    podman run $args $env $ports $cmd > /dev/null
    podman logs -f test | grep -q -m 1 "iocInit"
    # try to read a PV from the example IOC over CA
    if caget EXAMPLE:IBEK:SUM &>/dev/null; then
        echo "CA Success"
    else
        echo "CA Failure"
    fi
    podman stop test &> /dev/null; sleep 1
    echo ---
}
(
    echo no ports, network host, broadcast
    ports=
    args="--network host"
    check
    # the default sledgehammer approach works like native IOCs
)
# it appears that broadcast searches are not forwarded through the port mapping
(
    echo 5064, broadcast: FAILURE
    ports="-p 5064:5064 -p 5064:5064/udp"
    check
)
(
    echo 5064 no UDP, broadcast: FAILURE
    ports="-p 5064:5064"
    check
)
(
    echo 5064, unicast
    export EPICS_CA_ADDR_LIST="localhost"
    ports="-p 5064:5064 -p 5064:5064/udp"
    check
)
(
    echo 5064 no UDP, unicast: FAILURE
    export EPICS_CA_ADDR_LIST="localhost"
    ports="-p 5064:5064"
    check
    # EPICS_CA_ADDR_LIST uses UDP Unicast
)
# NOTE: binding to localhost means that only the local host clients
# can see the IOC. This is useful for testing without exposing the IOC
# on the whole subnet.
(
    echo 5064, broadcast, localhost: FAILURE
    ports="-p 127.0.0.1:5064:5064 -p 127.0.0.1:5064:5064/udp"
    check
    # this fails because broadcast searches go to the subnet broadcast
    # address, not to localhost
)
(
    echo 5064, unicast, localhost
    export EPICS_CA_ADDR_LIST="localhost"
    ports="-p 127.0.0.1:5064:5064 -p 127.0.0.1:5064:5064/udp"
    check
)
(
    echo 8064, broadcast
    export EPICS_CA_SERVER_PORT=8064
    env="-e EPICS_CA_SERVER_PORT=8064"
    ports="-p 8064:8064 -p 8064:8064/udp"
    check
)
(
    echo 8064, unicast, localhost
    export EPICS_CA_ADDR_LIST="localhost" EPICS_CA_SERVER_PORT=8064
    env="-e EPICS_CA_SERVER_PORT=8064"
    ports="-p 127.0.0.1:8064:8064 -p 127.0.0.1:8064:8064/udp"
    check
)
# remapping the ports does not work!
(
    echo 8064:5064, broadcast: FAILURE
    export EPICS_CA_SERVER_PORT=8064
    ports="-p 8064:5064 -p 8064:5064/udp"
    check
)
(
    echo 8064:5064, unicast, localhost: FAILURE
    export EPICS_CA_ADDR_LIST="localhost" EPICS_CA_SERVER_PORT=8064
    ports="-p 127.0.0.1:8064:5064 -p 127.0.0.1:8064:5064/udp"
    check
)
(
    echo 5064 no UDP, NAME_SERVER, localhost
    export EPICS_CA_NAME_SERVERS="localhost:5064"
    ports="-p 127.0.0.1:5064:5064"
    check
)
(
    echo 8064:5064 no UDP, NAME_SERVER, localhost
    export EPICS_CA_NAME_SERVERS="localhost:8064"
    ports="-p 127.0.0.1:8064:5064"
    check
)
```

The second script exercises PV Access:

```bash
#!/bin/bash
# demo of exposing PV Access outside of a container
# requires a venv with p4p installed
pvget='
from p4p.client.thread import Context
Context("pva").get("EXAMPLE:IBEK:SUM", timeout=0.5)
'
cmd='-dit --rm --name test ghcr.io/epics-containers/ioc-template-example-runtime:4.1.0'
check () {
    # start the IOC container and wait for it to finish initializing
    podman run $args $env $ports $cmd > /dev/null
    podman logs -f test | grep -q -m 1 "iocInit"
    # try to read a PV from the example IOC over PVA
    if python -c "$pvget" 2>/dev/null; then
        echo "PVA Success"
    else
        echo "PVA Failure"
    fi
    podman stop test &> /dev/null
    echo ---
}
(
    echo no ports, network host, broadcast
    ports=
    args="--network host"
    check
    # the default sledgehammer approach works like native IOCs
)
# PVA fails for both broadcast and unicast because the server opens a new
# random port for each TCP circuit, and that is not NAT friendly.
(
    echo 5075, broadcast: FAILURE
    ports="-p 5075:5075 -p 5075:5075/udp"
    check
)
(
    echo 5075, unicast: FAILURE
    export EPICS_PVA_ADDR_LIST="localhost"
    ports="-p 5075:5075 -p 5075:5075/udp"
    check
)
# NAME SERVER uses a single TCP connection and is compatible with NAT
#
# IMPORTANT - for this to work, both ends of the conversation must be pvxs.
# Thus to talk to ADPvaPlugin requires a pvagw running in the same container
# network to proxy the traffic
(
    echo 5075, NAME SERVER
    export EPICS_PVA_NAME_SERVERS="localhost:5075"
    ports="-p 5075:5075"
    check
)
(
    echo 8075:5075, NAME SERVER
    export EPICS_PVA_NAME_SERVERS="localhost:8075"
    ports="-p 8075:5075"
    check
)
```