Stage 3: Physical Radio

We are now ready to replace the emulated RAN with a physical small cell radio and real UEs. Unlike earlier stages of Aether OnRamp that worked exclusively with 5G, this stage allows either 4G or 5G small cells (but not both simultaneously). The following instructions are written for the 5G scenario, but you can substitute “4G” for “5G” in every command or file name. (Exceptions to that rule are explicitly noted.)

In addition to the physical server used in previous stages, we now assume that server and the external radio are connected to the same L2 network and share an IP subnet. This is not a hard requirement for all deployments, but it does simplify communication between the radio and the UPF running within Kubernetes on the server. Take note of the network interface on your server that provides connectivity to the radio, for example by typing:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp193s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 2c:f0:5d:f2:d8:21 brd ff:ff:ff:ff:ff:ff
    inet 10.76.28.113/24 metric 100 brd 10.76.28.255 scope global enp193s0f0
       valid_lft forever preferred_lft forever
    inet6 fe80::2ef0:5dff:fef2:d821/64 scope link
       valid_lft forever preferred_lft forever

In what will serve as a running example throughout this section, the interface is enp193s0f0 with IP address 10.76.28.113.
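If you already know the interface name, the brief form of the same ip tool (purely a convenience, not a required step) confirms at a glance that the interface is up and on the radio's subnet; you should see something like:

$ ip -br addr show enp193s0f0
enp193s0f0       UP             10.76.28.113/24 fe80::2ef0:5dff:fef2:d821/64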

Local Blueprint

Unlike earlier stages that took advantage of canned configurations, adding a physical base station means you need to account for specifics of your local environment. Editing various configuration files is a necessary step in customizing a deployment, and so Aether OnRamp establishes a simple convention to help manage that process.

Specifically, the blueprints directory currently defines five distinct ways to configure and deploy Aether:

  • release-2.0: Deploys Aether v2.0 in a single server (or VM), running an emulated RAN.

  • release-2.1: Deploys Aether v2.1 in a single server (or VM), running an emulated RAN.

  • latest: Deploys the latest version of Aether in a single server (or VM), running an emulated RAN.

  • 4g-radio: Deploys the latest version of Aether in a single server (or VM), connected to a physical eNB.

  • 5g-radio: Deploys the latest version of Aether in a single server (or VM), connected to a physical gNB.

Up to this point, we have been using latest as our default blueprint, but for this stage, we will shift to the 5g-radio blueprint (or 4g-radio, as appropriate).
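You can list the available blueprints from the top of your OnRamp checkout; the directory names correspond to the list above:

$ ls blueprints/
4g-radio  5g-radio  latest  release-2.0  release-2.1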

Each blueprint specifies three sets of parameters that define how Aether is configured and deployed: (1) a set of Makefile variables that customize the deployment process; (2) a set of Helm Charts that customize the Kubernetes workload that gets deployed; and (3) a set of value override (and similar) files that customize how the microservices in that workload are configured. All of these parameters are defined in the blueprint’s config file, so using the 5g-radio blueprint as an example:

$ cat blueprints/5g-radio/config
# Configuration for External 5G Radio (gNB) Blueprint

# Variables
ENABLE_RANSIM := false
LOCAL_CHARTS := false
DATA_IFACE := eth0

# For installing the Core
SD_CORE_CHART            := aether/sd-core

# For installing the ROC
AETHER_ROC_UMBRELLA_CHART := aether/aether-roc-umbrella
ATOMIX_CONTROLLER_CHART   := atomix/atomix-controller
ATOMIX_RAFT_STORAGE_CHART := atomix/atomix-raft-storage
ATOMIX_RUNTIME_CHART      := atomix/atomix-runtime --version 0.1.9  # v0.2.0 not working
ONOS_OPERATOR_CHART       := onosproject/onos-operator

# For installing monitoring
RANCHER_MONITORING_CRD_CHART := rancher/rancher-monitoring-crd
RANCHER_MONITORING_CHART     := rancher/rancher-monitoring

# Helm Value Overrides and other Config Files
ROC_VALUES     := $(BLUEPRINTDIR)/roc-values.yaml
ROC_5G_MODELS  := $(BLUEPRINTDIR)/roc-5g-models.json
5G_CORE_VALUES := $(BLUEPRINTDIR)/sd-core-5g-values.yaml
MONITORING_VALUES := $(BLUEPRINTDIR)/monitoring.yaml

As your deployment deviates further from the release, whether to account for differences in your target computing environment or to incorporate changes you make to the software being deployed, you can record those changes in these or other blueprints that you create. For the purpose of this section, we will simply edit files in the blueprints/5g-radio directory, but you may want to make your own local blueprint directory, copy these files into it, and make your changes there.

At this point, you need to make two edits. The first is to the DATA_IFACE variable in blueprints/5g-radio/config, changing it from eth0 to whatever name you noted earlier (e.g., enp193s0f0 in our running example). The second is to the default BLUEPRINT setting in MakefileVar.mk, changing it from latest to 5g-radio. Alternatively, you can modify that variable on a case-by-case basis; for example:

BLUEPRINT=5g-radio make net-prep
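The DATA_IFACE edit can also be made with a one-line substitution (a convenience sketch that assumes the variable appears exactly as in the config file shown above), after which it is worth double-checking both settings:

$ sed -i 's/^DATA_IFACE .*/DATA_IFACE := enp193s0f0/' blueprints/5g-radio/config   # assumes the DATA_IFACE := eth0 format shown above
$ grep DATA_IFACE blueprints/5g-radio/config
$ grep BLUEPRINT MakefileVar.mk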

Going forward, you will be editing the yaml and json files in the 5g-radio blueprint, so we recommend familiarizing yourself with 5g-radio/sd-core-5g-values.yaml and 5g-radio/roc-5g-models.json (or their 4G counterparts).

Prepare UEs

5G-connected devices must have a SIM card, which you are responsible for creating and inserting. You will need a SIM card writer (these are readily available for purchase on Amazon) and a PLMN identifier constructed from a valid MCC/MNC pair. For our purposes, we use two different PLMN ids: 315010 constructed from MCC=315 (US) and MNC=010 (CBRS), and 00101 constructed from MCC=001 (TEST) and MNC=01 (TEST). You should use whatever values are appropriate for your local environment. You then assign an IMSI and two secret keys to each SIM card. Throughout this section, we use the following values as exemplars:

  • IMSI: each one is unique, matching the pattern 315010********* (15 digits in total)

  • OPc: 69d5c2eb2e2e624750541d3bbc692ba5

  • Key: 000102030405060708090a0b0c0d0e0f

Insert the SIM cards into whatever devices you plan to connect to Aether. Be aware that not all phones support the CBRS frequency bands that Aether uses. Aether is known to work with recent iPhones (11 and greater), Google Pixel phones (4 and greater) and OnePlus phones. CBRS may also be supported by recent phones from Samsung, LG Electronics and Motorola Mobility, but these have not been tested. Note that on each phone you will need to configure internet as the Access Point Name (APN). Another good option is to use a 5G dongle connected to a Raspberry Pi as a demonstration UE. This makes it easier to run diagnostic tests from the UE. For example, we have used APAL’s 5G dongle with Aether.

Finally, modify the subscribers block of the omec-sub-provision section in file 5g-radio/sd-core-5g-values.yaml to record the IMSI, OPc, and Key values configured onto your SIM cards. The block also defines a sequence number that is intended to thwart replay attacks. (As a reminder, these values go in 4g-radio/sd-core-4g-values.yaml if you are using a 4G small cell.) For example, the following code block adds IMSIs between 315010999912301 and 315010999912310:

subscribers:
- ueId-start: "315010999912301"
  ueId-end: "315010999912310"
  plmnId: "315010"
  opc: "69d5c2eb2e2e624750541d3bbc692ba5"
  key: "000102030405060708090a0b0c0d0e0f"
  sequenceNumber: 135

Further down in the same omec-sub-provision section you will find two other blocks that need to be edited. The first, device-groups, assigns IMSIs to Device Groups. You will need to reenter the individual IMSIs from the subscribers block that will be part of the device-group:

device-groups:
- name: "5g-user-group1"
  imsis:
    - "315010999912301"
    - "315010999912302"
    - "315010999912303"

The second block, network-slices, sets various parameters associated with the Slices that connect device groups to applications. Here, you will need to reenter the PLMN information, with the other slice parameters remaining unchanged (for now):

plmn:
    mcc: "315"
    mnc: "010"

Aether supports multiple Device Groups and Slices, but the data entered here is purposely minimal; it’s just enough to bring up and debug an initial system. Over the lifetime of a running system, information about Device Groups and Slices (and the other abstractions they build upon) should be entered via the ROC, as described in Stage 4. When you get to that point, variable provision-network-slice should be set to false, causing the device-groups and network-slices blocks of sd-core-5g-values.yaml to be ignored. (The subscribers block is always required to configure SD-Core.)
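When you reach that point, you can locate the flag without hunting through the file by hand (the path here is for the 5g-radio blueprint; substitute the 4G file as appropriate):

$ grep -n provision-network-slice blueprints/5g-radio/sd-core-5g-values.yaml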

Bring Up Aether

You are now ready to bring Aether on-line, but it is safest to start with a fresh install of Kubernetes, so first type make clean if you still have a cluster running from an earlier stage. Then execute the following two Make targets (again assuming you have already edited the BLUEPRINT variable in MakefileVar.mk):

$ make node-prep
$ make net-prep

Once Kubernetes is running and the network properly configured, you are then ready to bring up the SD-Core as before, but without the ROC:

$ make 5g-core

You can verify the installation by running kubectl just as you did in Stage 1. You should see all pods with status Running, keeping in mind that you will see containers that implement the 4G core instead of the 5G core running in the omec namespace if you configured for that scenario.
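For example, to list the Core pods (the SD-Core runs in the omec namespace, as noted above):

$ kubectl get pods -n omec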

We postpone bringing up the ROC until Stage 4 (having fewer moving parts makes debugging the configuration easier), but you may want to bring up the monitoring system at this point, as it provides useful information about the progress you’re making:

$ make 5g-monitoring

Note that the monitoring subsystem can be instantiated before or after the Core, and correctly runs after restarts of the Core.
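To confirm the monitoring pods came up, a coarse check is to filter the full pod listing (the exact namespace the Rancher charts use is deployment-dependent, so we grep rather than name it):

$ kubectl get pods -A | grep -i monitoring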

Validate Configuration

Regardless of whether you bring up a 4G or 5G version of the Control Plane, the UPF pod implements SD-Core’s User Plane. To verify that the UPF is properly connected to the network, you can check to see that the Macvlan networks core and access are properly configured on your server. This can be done using ip, and you should see results similar to the following:

$ ip addr show core
15: core@enp193s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 06:f7:7c:65:31:fc brd ff:ff:ff:ff:ff:ff
    inet 192.168.250.1/24 brd 192.168.250.255 scope global core
       valid_lft forever preferred_lft forever
    inet6 fe80::4f7:7cff:fe65:31fc/64 scope link
       valid_lft forever preferred_lft forever

$ ip addr show access
14: access@enp193s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 82:ef:d3:bb:d3:74 brd ff:ff:ff:ff:ff:ff
    inet 192.168.252.1/24 brd 192.168.252.255 scope global access
       valid_lft forever preferred_lft forever
    inet6 fe80::80ef:d3ff:febb:d374/64 scope link
       valid_lft forever preferred_lft forever

Understanding why these two interfaces exist is helpful in troubleshooting your deployment. They enable the UPF to exchange packets with the gNB (access) and the Internet (core). In 3GPP terms, these correspond to the N3 and N6 interfaces, respectively, as shown in Figure 35. But these two interfaces exist both inside and outside the UPF. The above output from ip shows the two outside interfaces; kubectl can be used to see what’s running inside the UPF, where access and core are the last two interfaces shown below:

$ kubectl -n omec exec -ti upf-0 -c bessd -- ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: eth0@if30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 8a:e2:64:10:4e:be brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.84.19/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::88e2:64ff:fe10:4ebe/64 scope link
       valid_lft forever preferred_lft forever
4: access@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 82:b4:ea:00:50:3e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.252.3/24 brd 192.168.252.255 scope global access
       valid_lft forever preferred_lft forever
    inet6 fe80::80b4:eaff:fe00:503e/64 scope link
       valid_lft forever preferred_lft forever
5: core@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 4e:ac:69:31:a3:88 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.250.3/24 brd 192.168.250.255 scope global core
       valid_lft forever preferred_lft forever
    inet6 fe80::4cac:69ff:fe31:a388/64 scope link
       valid_lft forever preferred_lft forever

All four are Macvlan interfaces bridged with DATA_IFACE. There are two subnets on this bridge: the two access interfaces are on 192.168.252.0/24 and the two core interfaces are on 192.168.250.0/24. Note that while we refer to core and access as interfaces in the context of a particular compute environment (e.g., the UPF container), they can also be viewed as virtual bridges or virtual links connecting a pair of compute environments (e.g., the host server and the UPF container). This makes the schematic shown in Figure 47 a helpful way to visualize the setup.

../_images/Slide241.png

Figure 47. The UPF container running inside the server that hosts Aether, with the core and access interfaces bridging the two. Information shown in gray (10.76.28.187, 10.76.28.113, enp193s0f0) is specific to a particular deployment site.
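You can also confirm the Macvlan relationship from the host; the -d (details) flag makes ip print the macvlan mode along with the @DATA_IFACE parent seen in the earlier listings:

$ ip -d link show core
$ ip -d link show access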

In this setting, the access interface inside the UPF has an IP address of 192.168.252.3; this is the destination IP address of GTP-encapsulated user plane packets from the gNB. In order for these packets to find their way to the UPF, they must arrive on the DATA_IFACE interface and then be forwarded on the access interface outside the UPF. (As described later in this section, it is possible to configure a static route on the gNB to send the GTP packets to DATA_IFACE.) Forwarding the packets to the access interface is done by the following kernel route, which should be present if your Aether installation was successful:

$ route -n | grep "Iface\|access"
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.252.0   0.0.0.0         255.255.255.0   U     0      0        0 access

The high-level behavior of the UPF is to forward packets between its access and core interfaces, while at the same time removing/adding GTP encapsulation on the access side. Upstream packets arriving on the access side from a UE have their GTP headers removed and the raw IP packets are forwarded to the core interface. The routes inside the UPF’s bessd container will look something like this:

$ kubectl -n omec exec -ti upf-0 -c bessd -- ip route
default via 169.254.1.1 dev eth0
default via 192.168.250.1 dev core metric 110
10.76.28.0/24 via 192.168.252.1 dev access
10.76.28.113 via 169.254.1.1 dev eth0
169.254.1.1 dev eth0 scope link
192.168.250.0/24 dev core proto kernel scope link src 192.168.250.3
192.168.252.0/24 dev access proto kernel scope link src 192.168.252.3

The default route via 192.168.250.1 is directing upstream packets to the Internet via the core interface, with a next hop of the core interface outside the UPF. These packets undergo source NAT in the kernel and are sent to the IP destination in the packet. This means that the 172.250.0.0/16 addresses assigned to UEs are not visible beyond the Aether server. The return (downstream) packets undergo reverse NAT and now have a destination IP address of the UE. They are forwarded by the kernel to the core interface by these rules on the server:

$ route -n | grep "Iface\|core"
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.250.0.0     192.168.250.3   255.255.0.0     UG    0      0        0 core
192.168.250.0   0.0.0.0         255.255.255.0   U     0      0        0 core

The first rule above matches packets to the UEs on the 172.250.0.0/16 subnet. The next hop for these packets is the core IP address inside the UPF. The second rule says that next hop address is reachable on the core interface outside the UPF. As a result, the downstream packets arrive in the UPF where they are GTP-encapsulated with the IP address of the gNB.
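If you want to confirm how the source NAT is being applied, you can list the kernel's NAT rules on the server; whether a MASQUERADE/SNAT rule appears, and its exact form, depends on how your installation configured NAT for the 172.250.0.0/16 UE subnet:

$ sudo iptables -t nat -L POSTROUTING -n -v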

Note that if the access and core interfaces are missing outside the UPF, the following commands can be used to create them manually, bridging them to your DATA_IFACE and assigning the outside addresses shown earlier:

$ sudo ip link add core link <DATA_IFACE> type macvlan mode bridge
$ sudo ip addr add 192.168.250.1/24 dev core && sudo ip link set core up
$ sudo ip link add access link <DATA_IFACE> type macvlan mode bridge
$ sudo ip addr add 192.168.252.1/24 dev access && sudo ip link set access up

gNodeB Setup

Once the SD-Core is up and running, we are ready to bring up the physical gNodeB. The details of how to do this depend on the gNB you are using, but we identify the main issues you need to address. As example 4G and 5G small cells commonly used with Aether, we recommend the two SERCOMM devices available on the ONF MarketPlace.

The first of these (the 4G eNB) is documented in the Aether Guide. The second (the 5G gNB) includes a Users Guide. We use details from the SERCOMM gNB in the following to make the discussion concrete, where the gNB is assigned IP address 10.76.28.187 and, per our running example, the server hosting Aether is at IP address 10.76.28.113. (Recall that we assume these are both on the same subnet.) See Figure 48 for a screenshot of the SERCOMM gNB management dashboard, which we reference in the instructions that follow:

../_images/Sercomm.png

Figure 48. Management dashboard on the SERCOMM gNB, showing the dropdown Settings menu overlaid on the NR Cell Configuration page (which shows default radio settings).

  1. Connect to Management Interface. Start by connecting a laptop directly to the LAN port on the small cell, pointing your laptop’s web browser at the device’s management page (https://10.10.10.189). You will need to assign your laptop an IP address on the same subnet (e.g., 10.10.10.100). Once connected, log in with the provided credentials (login=sc_femto, password=scHt3pp).

  2. Configure WAN. Visit the Settings > WAN page to configure how the small cell connects to the Internet via its WAN port, either dynamically using DHCP or statically by setting the device’s IP address (10.76.28.187) and default gateway (10.76.28.1).

  3. Access Remote Management. Once on the Internet, it should be possible to reach the management dashboard without being directly connected to the LAN port (https://10.76.28.187).

  4. Connect GPS. Connect the small cell’s GPS antenna to the GPS port, and place the antenna so it has line-of-sight to the sky (e.g., in a window). The Status page of the management dashboard should report its latitude, longitude, and fix time.

  5. Spectrum Access System. One reason the radio needs GPS is so it can report its location to a Spectrum Access System (SAS), a requirement in the US to coordinate access to the CBRS spectrum in the 3.5 GHz band. For example, the production deployment of Aether uses the Google SAS portal, which the small cell can be configured to query periodically. To do so, visit the Settings > SAS page. Acquiring the credentials needed to access the SAS requires that you go through a certification process, but as a practical matter, it may be possible to test an isolated, low-power femto cell indoors before completing that process. Consult with your local network administrator.

  6. Configure Radio Parameters. Visit the Settings > NR Cell Configuration page (shown in the figure) to set parameters that control the radio. It should be sufficient to use the default settings when getting started.

  7. Configure the PLMN. Visit the Settings > 5GC page to set the PLMN identifier on the small cell (00101) to match the MCC/MNC values (001/01) specified in the Core.

  8. Connect to Aether Control Plane. Also on the Settings > 5GC page, define the AMF Address to be the IP address of your Aether server (e.g., 10.76.28.113). Aether’s SD-Core is configured to expose the corresponding AMF via a well-known port, so the server’s IP address is sufficient to establish connectivity. (The same is true for the MME on a 4G small cell.) The Status page of the management dashboard should confirm that the control interface is established.

  9. Connect to Aether User Plane. As described in an earlier section, the Aether User Plane (UPF) is running at IP address 192.168.252.3 in both the 4G and 5G cases. Connecting to that address requires installing a route to subnet 192.168.252.0/24. How you install this route is device- and site-dependent. If the small cell provides a means to install static routes, then a route to destination 192.168.252.0/24 via gateway 10.76.28.113 (the server hosting Aether) will work. (This is the case for the SERCOMM eNB.) If the small cell does not allow static routes (as is the case for the SERCOMM gNB), then 10.76.28.113 can be installed as the default gateway, but doing so requires that your server also be configured to forward IP packets on to the Internet; see the sketch following this list.
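In the default-gateway case just described, the Aether server must forward (and typically NAT) the gNB’s other traffic toward the Internet. The following is a minimal sketch of one way to do that, assuming enp193s0f0 is also the server’s uplink interface; adjust to match your site’s routing and security policy:

$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo iptables -t nat -A POSTROUTING -o enp193s0f0 -j MASQUERADE   # assumes enp193s0f0 is the uplink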

Run Diagnostics

Successfully connecting a UE to the Internet is not a straightforward exercise. It involves configuring the UE, gNB, and SD-Core software in a consistent way; establishing SCTP-based control plane (N2) and GTP-based user plane (N3) connections between the base station and Mobile Core; and traversing multiple IP subnets along the end-to-end path.

The UE and gNB provide limited diagnostic tools. For example, it’s possible to run ping and traceroute from both. You can also run the ksniff tool described in Stage 1, but the most helpful packet traces are captured by the following commands, which you can run on the Aether server (we use our example enp193s0f0 interface for illustrative purposes):

$ sudo tcpdump -i any sctp -w sctp-test.pcap
$ sudo tcpdump -i enp193s0f0 port 2152 -w gtp-outside.pcap
$ sudo tcpdump -i access port 2152 -w gtp-inside.pcap
$ sudo tcpdump -i core net 172.250.0.0/16 -w n6-inside.pcap
$ sudo tcpdump -i enp193s0f0 net 172.250.0.0/16 -w n6-outside.pcap

The first trace, saved in file sctp-test.pcap, captures SCTP packets sent to establish the control path between the base station and the Mobile Core (i.e., N2 messages). Toggling “Mobile Data” on the UE, for example by turning Airplane Mode off and on, will generate the relevant control plane traffic.
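You can then inspect the capture offline to confirm that the N2 exchange actually took place (the exact messages you see will vary by release and UE):

$ sudo tcpdump -nr sctp-test.pcap | head -20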

The second and third traces, saved in files gtp-outside.pcap and gtp-inside.pcap, respectively, capture GTP packets (tunneled through port 2152) on the RAN side of the UPF. Setting the interface to enp193s0f0 corresponds to “outside” the UPF and setting the interface to access corresponds to “inside” the UPF. Running ping from the UE will generate the relevant user plane (N3) traffic.

Similarly, the fourth and fifth traces, saved in files n6-inside.pcap and n6-outside.pcap, respectively, capture IP packets on the Internet side of the UPF (which is known as the N6 interface in 3GPP). In these two tests, net 172.250.0.0/16 corresponds to the IP addresses assigned to UEs by the SMF. Running ping from the UE will generate the relevant user plane traffic.

If gtp-outside.pcap contains packets but gtp-inside.pcap is empty (no packets captured), you can run the following commands to make sure packets are forwarded from the enp193s0f0 interface to the access interface and vice versa:

$ sudo iptables -A FORWARD -i enp193s0f0 -o access -j ACCEPT
$ sudo iptables -A FORWARD -i access -o enp193s0f0 -j ACCEPT
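You can verify that the rules are installed and that their packet counters increase as you repeat the test:

$ sudo iptables -L FORWARD -n -v

Note that rules added this way do not persist across a reboot, so you may need to reapply them (or save them with your distribution’s mechanism) if the problem returns.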