Hyper Open Edge Cloud

Introducing Rapid.Space for vRAN management

  • Last Update: 2022-10-10
  • Version: 003
  • Language: en

Agenda

  • Introduction
  • Step 1: provision OS and backhaul
  • Step 2: access ORS from Master
  • Step 3: deploy eNb/gNb software release and instances
  • Step 4: add SIM cards
  • Step 5: monitor
  • Step 6: self-heal
  • Step 7: add a CDN (edge service)
  • Step 8: self upgrade
  • Step 9: billing
  • Next Steps

This document introduces how Rapid.Space OSS/BSS can be used to automate the deployment and operation of a 4G/5G network based on vRAN and edge computing. We will go step by step using the example of the ORS, a self-contained, plug-and-play 4G/5G base station which has been shipped to dozens of users worldwide and which is operated fully automatically through a cloud-based, cloud-native architecture.

Introduction

First, let us introduce the general concepts of Rapid.Space OSS/BSS, the technology used to operate the 4G/5G vRAN network based on the Amarisoft software-defined radio stack.

Target Audience

  • operators of public vRAN
  • operators of private vRAN

This presentation is targeted at operators of private and public networks that are considering how to operate dozens of vRAN stations, either through a cloud-based or an on-premise approach.

Problems we solve

I have:

  • Linux servers
  • radio units (RU)
  • Amarisoft software
  • SIM cards

I need to automate (for Amarisoft & RU):

  • deployment
  • configuration
  • monitoring
  • repair
  • optimisation
  • upgrade

I may need:

  • customer subscription, billing and issue tracking
  • latency-optimised IPv6 backhaul for edge & cloud
  • edge services (ex. CDN) at eNodeB/gNodeB


The term "OSS/BSS" stands for operations support system / business support system.

The problem it solves is to automate the management of a radio network.

Supposing you already have a collection of Linux servers, Amarisoft licenses, radio heads and SIM cards, how can you ensure that all of these are automatically deployed and operated in a consistent manner at minimal cost?

The "OSS" part solves the problem of automating deployment, configuration, monitoring, repair, optimisation, upgrade of Amarisoft stack and radio units. It provides orchestration, disaster recovery, resource allocation, etc. Thanks to its generic architecture, it can be used to consistently deploy vRAN (eNodeB, gNodeB, IMS, epc, etc.) but also any cloud service (IaaS, PaaS, SaaS).

The "OSS" can also support, as an option, the deployment of so-called "value added" edge services such as content distribution networking or IoT buffering. It can also automate Layer-3 networking with a latency-optimised IPv6 backhaul.

The "BSS" part, which can be used independently of the "OSS" part, provides customer subscription, billing and issue tracking, for telecom but also for other industries. It is for example used by SANEF Tolling to handle close to two million subscribers of e-tolling service.

How do we solve those problems?

Service: anything that runs on a computer (ex. gNodeB, CDN, mariadb), based on specified requirements (ex. frequency, availability), to meet customer expectations
Software Release: a Service class based on a given upstream software and parameter schema
Software Instance: an instance of a Software Release running on a given Compute Node with given parameters that specify the Service requirements
Compute Node: a computer capable of running multiple Software Instances of multiple Software Releases, as specified by the Master it is attached to
Master: a Software Instance which keeps track of which Software Instances are supposed to be running on which Compute Nodes

Rapid.Space relies on five elementary concepts to solve the core problems of an OSS/BSS: Service, Software Release, Software Instance, Compute Node and Master.

The fundamental idea of Rapid.Space OSS is that "everything is a Service" which is meant to satisfy a customer based on requirements.

A macro-cell providing 4G/5G at 1 Gbps on average, 99.99% of the time, to 2,000 UEs is a service. A SIM card which provides access to any of the 10,000 base stations of a network, as long as the owner has paid their past invoices, is a service. A core network which provides Internet access to 10,000 macro-cells and 5,000,000 UEs, 99.99% of the time, is a service.

Services can be telecom services but also IT services: a billing service, a CDN service, etc.

Services are implemented through so-called Software Releases, which encapsulate the upstream software (source or binary), a schema defining the configuration parameters of the service, and a collection of scripts or tools to automate deployment, configuration, monitoring, repair, optimisation and upgrade. A Software Release can be supplied to a Compute Node, which makes the Compute Node able to instantiate this Software Release as a Software Instance and bring it to life.

A Software Instance is thus the live instance of a Software Release with certain parameters. For example, the Amarisoft Software Release in Rapid.Space can be instantiated multiple times on the same server (a.k.a. Compute Node). One Software Instance can provide 4G eNb service whereas another one can provide 5G gNb service and a third one an epc core network service. Thanks to the notion of Software Instance type, a single Software Release can be used for different types of mutually related service, either independently or in an orchestrated manner.

A Compute Node is a computer which can be used to deploy Software Instances. A Compute Node usually runs the slapos-node software and is connected to a Master (see below). A Compute Node can be a server, a virtual machine, a laptop, a smartphone, a Raspberry Pi, etc. It can even be a subdirectory of a user directory which runs a minimal implementation of the Master called "slapproxy", which is useful for development or testing without having to install a full version of the Master software. A Compute Node collects from the Master it is attached to the list of Software Instances which it is supposed to run.
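As a hedged illustration of this development workflow (command names assume a recent slapos.core release and may differ), a local test environment based on slapproxy can look like this:

slapos configure local    # create a standalone environment with a minimal Master (slapproxy)
slapos proxy start        # run the minimal Master
slapos node software      # build the supplied Software Releases
slapos node instance      # instantiate the requested Software Instances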

A Master is a special Software Instance which keeps track in a persistent way of which Software Instances are supposed to be running on which Compute Nodes.

Architecture

The Master is the only stateful part of the environment. Whenever a Compute Node crashes or gets destroyed, it can be automatically rebuilt. Simply reinstall the base OS image on a new Compute Node, connect it to the Master and let the Compute Node query the Master. The Compute Node will then know which Software Instances it is supposed to be running and autonomously self-provision them.

Equivalent concepts

Rapid.Space concept, with its business, Linux and Docker equivalents:

  • Service — business: sales order; Linux: no equivalent; Docker: no equivalent
  • Software Release — business: model reference; Linux: an RPM/DEB package plus many scripts to automate configuration, monitoring, disaster recovery, billing, etc.; Docker: a Docker image plus many scripts to automate configuration, monitoring, disaster recovery, etc.
  • Software Instance — business: serial number; Linux: a couple of entries in /etc/init.d/ and /etc/cron.d to start the software provided in the RPM/DEB package, plus the many scripts to automate configuration, monitoring, disaster recovery, billing, etc.; Docker: a running container with many scripts to automate configuration, monitoring, disaster recovery, etc.
  • Compute Node — business: warehouse; Linux: a server or VM plus many packages and files to automate OS management; Docker: a server or VM attached to Kubernetes
  • Master — business: ERP; Linux: no equivalent; Docker: Kubernetes + Anthos

The architecture of Rapid.Space OSS/BSS is inspired by how businesses run and by the idea of encapsulation.

The idea of Service is the equivalent of sales order in business. It defines what the customer (or user) expects to be provided, either for a service or a product, without specifying how this should be technically provided.

The idea of Software Release is equivalent to the idea of a model reference for a product (ex. Mercedes Class E), with its blueprints and bill of materials for each possible variation (ex. color, engine) and type (ex. cabriolet, sedan).

The idea of Software Instance is equivalent to the actual car with a serial number (VIN) and a given color, type, engine, etc.

The Compute Node is equivalent to the warehouse or parking area where cars are stored.

The Master is the equivalent of the ERP which manages the complete process of car production and distribution.

In traditional IT systems, the notions of "Service" or "Master" seldom exist explicitly. They are often implemented through management processes carried out by humans. With the recent fashion for containers, the idea of Master has been embodied by the combination of Google Anthos and Kubernetes. However, the notion of "Service" is still not explicit.

What helps in understanding how a "Service" differs from a software package or a container image is the idea of service lifecycle encapsulation, which combines the software upstream with scripts or tools that automate configuration, monitoring, disaster recovery, billing, etc. of that software.

For example, if we deploy an Amarisoft eNodeB, we would like to ensure that the eNodeB is not shared by too many UEs and that its performance (KPIs) is sufficient most of the time. We would also like to automatically monitor the health of the RU it is connected to, and sometimes change eNodeB parameters depending on the time of day, environmental conditions or alarms for missed KPIs. In case the server which hosts the eNodeB is destroyed, we would like to restore it automatically.

This kind of service requirement is often implemented by adding a wide diversity of scripts and tools around the Amarisoft software: one for monitoring (ex. Zabbix), one for orchestration (ex. Google Kubernetes), one for operation management (ex. Google Anthos), one for billing (ex. SAP), one for backup (ex. Veritas). Implementing a service can require developing dozens of scripts in five to ten programming languages (ex. PHP, bash, python, yaml, etc.) and five to ten different application environments, all managed independently. It is thus very difficult to get a consistent overview of all facets of a service because the logic is split across so many different locations. It is also very hard to test.

In Rapid.Space OSS/BSS, all facets of a Service are encapsulated in a single place: the Software Release. We use a single language to "glue" all facets: buildout. And everything is managed from a single application: the Master.

Design principles: portability

  • Everything is a service instance of a service class
  • Service class encapsulates the full service lifecycle (build, deploy, configure, orchestrate, monitor, account, self-repair, disaster recovery)
  • End-to-end testing is possible on a small laptop without being "root"
  • Deployment behind NAT or firewall is easy
  • Every service instance runs on a dedicated IP address range
  • Portable across all Linux distros and possibly other POSIX OS
  • Portable across all architectures (x86, ARM)
  • Minimise resources (CPU, memory, disk)
  • Rely on POSIX security for basic process isolation (no "root" allowed)
  • Rely on paravirtualisation for further process isolation (qemu)
  • Rely on physical isolation for even further process isolation
  • System configuration is handled by a separate tool

Rapid.Space OSS/BSS is designed for portability. It does not depend on any Linux distribution. It can, at least in theory, manage services on any POSIX-like operating system. It can run on nano-computers (ex. ARM, 1 core, 512 MB RAM) or on big servers (ex. AMD, 64 cores, 1 TB RAM).

Design principles: resiliency

  • Trust no network (LAN, WAN, Internet, etc.)
  • Everything that can be local is local
  • Stable even with lots of disconnections
  • Nodes keep on running even if disconnected for long time
  • Reproduce node deployment any time without human intervention
  • Automate node disaster recovery test
  • Recursive (SlapOS inside SlapOS)
  • Self-hosted (SlapOS deployed by SlapOS)
  • Federated (SlapOS delegated to SlapOS)

Rapid.Space OSS/BSS is designed for resiliency. Any node can be recovered from scratch automatically, even after 10 years. Services keep running in case of a network split or backhaul downtime.

Case Study: ORS

We are going to study how Rapid.Space OSS/BSS works in the case of the Open Radio System (ORS), a small 4G/5G base station sold by Rapid.Space, mainly in Europe.

Step 1: provision OS and backhaul

The first step is to install the base operating system. The base image is just a minimal operating system, such as Linux, capable of connecting to the Master. Once it is connected, all operations are carried out from the Master.

Because Amarisoft recommends Ubuntu and Fedora, we tend to use an Ubuntu base for any Compute Node that needs to run vRAN. For other Compute Nodes, we use Debian or CentOS. This is not an issue since Rapid.Space OSS/BSS can support multiple operating systems in a consistent way and run the same services despite OS diversity.

Install base OS image

In the case of the ORS, we do a "manual" installation before we close the ORS case. We follow this approach because sales of the ORS are still small (100s). Since a manual installation takes less than 5 minutes, there is no reason to automate this step.

But if sales were to grow, we would just "flash" a standard OS image onto the SSD and mount that pre-installed SSD at the factory.

In the case of servers, we use a Raspberry Pi or an OLinuXino nano-server which provides PXE boot to the server and unattended remote installation.

Overall, the base OS image can be installed through any imaginable way. Rapid.Space OSS/BSS does not impose any specific solution, nor does it impose any specific base OS or Linux distribution.

Provision Compute Node with Token

Once the base OS image is installed, we go to the Master in the "Computer" section. We then click on "Get Token". This provides us with a token: 20220412-6BC75A.

We then log into the base OS installed in the ORS and we run a simple command line: 

wget https://deploy.erp5.net/vifib
bash vifib

This launches an Ansible playbook which downloads all packages (slapos-node, re6stnet, etc.) and asks the user to enter the token (20220412-6BC75A).

Thanks to the token, the operating system is then configured to connect to the Master.

The ORS has become a Compute Node. From now on, it can be managed automatically through the Master.

OSS/BSS is provisioned

slapos node status
/opt/slapgrid
/srv/slapgrid
/opt/slapos/slapos.xml
/etc/cron.d/slapos-node
/opt/slapos/log/slapos-node-software.log
/opt/slapos/log/slapos-node-instance.log
/opt/slapos/log/slapos-node-format.log
/opt/slapos/log/slapos-node-report.log

The fact that the ORS is now acting as a Compute Node can be observed through various command lines and files specific to SlapOS, the technology behind Rapid.Space OSS/BSS.

First, the slapos command line is now available on the ORS. It is for example possible to query the status of the Compute Node:

slapos node status

Various directories have been created. Software Releases are deployed inside /opt/slapgrid whereas Software Instances are deployed inside /srv/slapgrid.

The configuration of the Compute Node is stored in /opt/slapos/slapos.xml and in /etc/slapos.

The cron entries for SlapOS are located in /etc/cron.d/slapos-node. Log files can be found in /opt/slapos/log/:

  • /opt/slapos/log/slapos-node-software.log for the installation of Software Releases;
  • /opt/slapos/log/slapos-node-instance.log for the creation of Software Instances;
  • /opt/slapos/log/slapos-node-format.log for the configuration of system files and permissions for SlapOS;
  • /opt/slapos/log/slapos-node-report.log for the reporting of resource usage.
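For example, to follow the deployment of Software Instances in real time on the node, one can simply run:

tail -f /opt/slapos/log/slapos-node-instance.log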

IPv6 backhaul is provisioned

ip -6 a
ip -6 route
ps aux | grep re6st

As part of the automated provisioning of the ORS, we also configure an IPv6 backhaul based on re6stnet technology.

Each ORS is associated with a unique IPv6 address, managed by the re6st so-called "directory", which is itself managed automatically by the Master. The re6st "directory" automates the process of associating to each Compute Node a specific network configuration based on an X509 certificate which identifies that Compute Node. It does on a WAN what a DHCP server does on a LAN based on MAC addresses, but with some extra security features.

With re6st, every Compute Node has a full IPv6 range as we can see by typing:

ip -6 a

Routes are automatically established on every Ethernet interface:

ip -6 route

Virtual interfaces, such as openvpn tunnels, help circumvent congestion on the default routes of the backhaul:

ps aux | grep re6st

This is how re6st can be used to minimise latency in well connected networks or to circumvent congestion that can occur at the borders of countries such as China.

We could provision many other things...


The provisioning procedure of Rapid.Space OSS/BSS onto the ORS can be applied in the same way to many other types of device: servers (ex. OCP Tioga Pass), switches (ex. Edge-core AS5812), Raspberry Pi, OLinuXino, drones, laptops, smartphones, etc.

This is the beauty of the underlying SlapOS technology: it can be applied to virtually any type of computing device and OS. This provides a level of flexibility not often found.

Step 2: access ORS from Master

Now that the ORS has been provisioned with Rapid.Space OSS/BSS software and attached to a Master, all management can be done remotely from the Master.

Master Panel as Operator

The Master has two facets: the Panel and the Backend. The Panel is the facet that is used for most operations related to creation or monitoring of Software Instances (ex. an eNodeB Software Instance). The Panel is used both by operators (those who manage the network or the edge cloud) and by users (those who use the network or the edge cloud). Interaction between users and operators is implemented through Support Requests.

The Panel is used to access Services (Software Instances) and Servers (Compute Nodes). Compute Nodes can be grouped by Network, Site and Project.

A Dashboard provides an overview of pending monitoring alarms and Support Requests.

The Panel also provides access to invoice history.

ORS in Master Panel as operator

The ORS that was previously installed appears in the Panel as a Compute Node. The RSS buttons provide a URL to an RSS feed which can be used to monitor the ORS state from third-party applications.

Note: because this video was taken at the end of the installation procedure (after Step 3), we can observe that the Amarisoft Software Release has already been supplied.

Master Backend as operator

The other facet of the Master is the Backend. It is a complete ERP system that can be used to implement virtually any billing use case. It is used for example by SANEF highways in France for close to 2 million subscribers.

The main purpose of the Backend is to define workflows and worklists which structure the daily operation management of the network or the edge cloud. This daily work consists mainly of Support Request handling and incident resolution. However, it can also include tasks related to the logistics of the network or cloud (ex. servers to move from one site to another).

It is important to note that both the Panel and the Backend are in reality the same Master application and the same database with different user interfaces. This interface can be customised or complemented by specialised web sites, as in Bip&Go, for example, which is the customer front-end of a billing system sharing the same ERP5 technology as the Rapid.Space OSS/BSS Master.

Master Panel as user

Users can also access the Panel of Rapid.Space OSS/BSS with the same features. Security policies restrict what users can do in the application compared to operators. The Panel is very important for users since it is where invoices are displayed and new services (ex. SIM cards) are requested.

ORS in Master Panel as user

Users can access their ORS in the Panel and see the same information and configuration parameters as operators do. The use case here is for a private network where the ORS is deployed. Users can, in particular, deploy additional edge services into their ORS.

Access Master through CLI and API

All aspects of the Rapid.Space OSS/BSS can be managed using a command line interface (CLI) or APIs. This is true for both operators and users.
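As a sketch of what this looks like (assuming the slapos command-line client has been configured with credentials for the Master, and using placeholder identifiers), the supply and request operations described in Step 3 can be scripted:

slapos supply <software-release-url> <compute-node-id>
slapos request my-instance <software-release-url> --node computer_guid=<compute-node-id>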

Step 3: deploy eNb/gNb software release and instances

Now that we have access to the ORS in the Master, it is time to deploy a 4G/5G eNb/gNb Service by supplying a Software Release and creating a Software Instance.

Supply Software Release to Compute Node

In order to supply a Software Release, we first access the ORS Compute Node and use the "Supply" button to supply the Amarisoft ORS Software Release. If multiple versions exist, we can select any of them. In particular, it is possible to supply multiple versions, which can be useful if we need to run multiple Software Instances with different versions.

Create eNb/gNb Software Instance

We will now create Software Instances of the Amarisoft Software Release. For convenience, we request two Software Instances of the same Software Release but with different types. One delivers the 4G eNb service whereas the other delivers the 5G gNb service. By starting/stopping each Software Instance, we can switch from 4G to 5G without losing configuration parameters.
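A hedged CLI equivalent of this step could look as follows; the type names "enb" and "gnb" are assumptions here, since the actual types are defined by the Software Release schema:

slapos request ors-enb <software-release-url> --type enb --node computer_guid=<ors-id>
slapos request ors-gnb <software-release-url> --type gnb --node computer_guid=<ors-id>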

Step 4: add SIM cards

Once the eNb/gNb is set up and running, we need to provision SIM cards. In Rapid.Space OSS/BSS we use the same approach for SIM cards as for the eNb/gNb: a SIM card is a subscription service, which the user can access in the Panel and for which they can be invoiced.

Pre-configured SIM Cards

In our example, we suppose that we have SIM cards that are already prepared. Each SIM card has its own identification parameters. In the case of the Rapid.Space private vRAN application based on ORS, we use test SIM cards that are provisioned using a USB adapter. On a commercial public network, SIM cards are "production" SIM cards already provisioned at the factory.

Provision OSS/BSS with SIM cards

We use the Panel CLI and a python script to provision SIM cards by reading a file which already contains all identifiers. This way, we do not need to use the GUI.
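As a minimal sketch of such a script (the sims.csv file format and the parameter names imsi and k are hypothetical; the actual schema is defined by the Software Release), SIM provisioning could look like this:

# sims.csv contains one "imsi,k" pair per line (hypothetical format)
while IFS=, read -r IMSI K; do
  slapos request "sim-$IMSI" <software-release-url> --slave \
    --parameters imsi="$IMSI" k="$K"
done < sims.csv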

SIM cards in OSS/BSS GUI

We can open the Panel GUI at any time to display all SIM cards that have been provisioned. In the case of Rapid.Space private vRAN, SIM cards are provisioned for each ORS. On a public network, they would be provisioned globally.

Step 5: monitor

Now that SIM cards are provisioned and the ORS is running an eNb or gNb, we can start monitoring the network. The monitoring concepts of Rapid.Space OSS/BSS are:

  • promise
  • alarm
  • monitoring tool

A promise defines parameters that should be monitored and the values that are expected. This concept was introduced by Mark Burgess and has been at the core of most operation management systems since the early days of Cfengine (see "SlapOS: 12 years of edge computing").

If a promise fails, an alarm is raised and the system will try to self-heal (see next step).
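Conceptually, a promise can be as simple as an executable that exits with a non-zero status when the expected condition does not hold (modern SlapOS promises are Python plugins, but the principle is the same). A minimal sketch:

#!/bin/sh
# promise sketch: verify that the Amarisoft eNB process is running
pgrep -f lteenb > /dev/null || exit 1
exit 0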

In case of failure, operators (or users in a private vRAN) can use the monitoring tool to access the history of alarms and the logs.

Monitoring GUI

The state of alarms is visible in the Panel. If everything is green, it means that everything is fine.

If a Compute Node becomes red, it means that it has become unreachable.

If a Software Instance becomes red, it means that one of its promises is failing. If it is orange, it means that no information has been sent to the Master for some time.

If we click on the green button of a Software Instance, we enter the monitoring tool.

The monitoring tool is a separate HTML5 application which directly connects to the ORS, downloads log files and displays them. We can see in particular the state and history of each promise. We can also access Amarisoft log files. By keeping log files locally rather than centralising them, Rapid.Space OSS/BSS remains scalable. 

If some log files need to be saved on a central big data platform, such as Wendelin, to create or update AI models, this can be achieved by deploying additional fluentd/fluentbit agents on a subset of the running ORS fleet. Again, by keeping log files local and by selecting which Compute Nodes should upload them to a central data lake, the Rapid.Space OSS/BSS architecture remains scalable and does not generate network congestion.

Step 6: self-heal

Whenever an Alarm is raised in Rapid.Space OSS/BSS, the faulty Software Instance will try to self-repair by reconfiguring itself.

Let us suppose an incident happens

Let us for example introduce a simple error by making the lteenb-avx2 process crash on purpose every second.

Bang! The incident is detected

We first observe in the Panel that the Software Instance becomes red (to be added in a future video).

Then, by digging into the Software Instance with the monitoring tool, we observe that some promises become red.

And the system is repaired

We then wait a bit. The self-healing process restarts the lteenb-avx2 process. Everything becomes green again.

Full incident to repair

Let us now observe the whole self-healing process from beginning to end.

The same approach can be applied to dynamic changes of eNb/gNb parameters. We could for example dynamically change the share of spectrum (5 MHz, 10 MHz, 20 MHz, etc.) allocated to 4G vs. 5G on a TDD BBU, based on the number of UEs that support LTE vs. NR SA on a single frequency and the target minimal bandwidth per UE.
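With the request-based model, such a change amounts to re-requesting the same Software Instance with updated parameters; a hedged sketch (the parameter name dl_bandwidth is hypothetical, the actual names come from the Software Release schema):

slapos request ors-enb <software-release-url> --type enb \
  --parameters dl_bandwidth=20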

Step 7: add a CDN (edge service)

One of the unique aspects of Rapid.Space OSS/BSS is that the same Master can manage, at the same time, the deployment of cloud services, edge services and vRAN services. Let us now see an example of an edge service which is useful when deployed next to a vRAN service.

Request a CDN on ORS

If we go back to the ORS Compute Node, we can observe that it is already supplied with Amarisoft Software Release.

We will now supply a Software Release which implements a complete CDN service. This CDN is based on a software called "Caddy" and another software called "Apache Traffic Server".

If the ORS is deployed in "all-in-one" mode with the epc/5gc running next to the eNb/gNb, then it is possible to route all TCP/IP traffic through the local CDN also running on the ORS. This brings two types of acceleration to HTTPS traffic:

  1. static assets (CSS, JS, images, etc.) will be cached locally;
  2. latency to establish the initial HTTPS connection will be reduced to less than 100 ms, rather than seconds in some cases.

This is very useful for the deployment of business applications such as ERPs on remote private networks such as factory networks. It can also be useful in some cases to optimise bandwidth on the backhaul.

Other edge applications include IoT buffering. This can be useful for connected devices using protocols such as MQTT. Having a big buffer next to the eNb/gNb can compensate for congestion or instability on the backhaul, as well as for the lack of buffering on IoT devices.

Step 8: self upgrade

Rapid.Space OSS/BSS solves the problem of upgrades at two different levels: the base image and the Software Release.

Publish an OS upgrade request

Upgrading the base OS image depends on each operating system or Linux distribution. In order to handle this diversity, Rapid.Space OSS/BSS uses the concept of "signed upgrade order".

An "upgrade order" defines a script which is supposed to be executed on a host to implement the upgrade procedure. In the case of the ORS, this script is implemented as an Ansible playbook which defines the target state at the end of the upgrade. The upgrade order is published on a public web site. The Compute Node (ORS) recurrently downloads the upgrade order and checks whether any upgrade must be applied.

The upgrade order is signed by the operator. Before executing an upgrade order, the Compute Node (ORS) verifies that the signature is valid.

It then executes the upgrade order.

Overall, the procedure to upgrade the base OS of thousands of Compute Nodes is the following:

  1. prepare an Ansible playbook
  2. sign and publish the Ansible playbook in an "upgrade order"
  3. let all Compute Nodes download and execute the upgrade order based on the Ansible playbook
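A hedged sketch of the node-side check follows; file names, URLs and the exact signature mechanism are illustrative only:

wget -q https://example.com/upgrade-order -O /tmp/upgrade-order
wget -q https://example.com/upgrade-order.sig -O /tmp/upgrade-order.sig
# execute the upgrade order (an Ansible playbook) only if the operator's signature is valid
openssl dgst -sha256 -verify /etc/operator-pubkey.pem \
  -signature /tmp/upgrade-order.sig /tmp/upgrade-order \
&& ansible-playbook /tmp/upgrade-order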

This procedure is compatible with the deployment of Compute Nodes behind NAT or firewalls.

The notion of "upgrade order" could also be implemented with technologies other than Ansible, and is thus not tied to Linux or any specific OS.

Please keep in mind that the "upgrade order" mechanism is only for the base OS image. For the running eNb/gNb Software Instance, everything goes through the Master.

Accept a Software Release upgrade

The upgrade procedure for Software Release has two steps:

  1. Supply the new Software Release to the Compute Node (this can be automated by scripts)
  2. Upgrade the Software Instance to the new Software Release (this is automated)

By default, the upgrade of a Software Instance is automated with a "user approval step". This is suitable for private vRAN. For public networks, this step can be removed.

Step 9: billing

Rapid.Space OSS/BSS can support billing processes based on subscriptions (SIM cards) and usage (tracked by the core network). Billing is usually very different from one user to another and requires customisation. We show here some examples in the case of an edge cloud service.

List of invoices

The list of invoices is accessed through the Master's Panel by clicking on "Invoices".


Sample Invoice

In the case of the ORS, we provide a "free, unlimited subscription" to the Rapid.Space service. This is why the amount is zero.

Next Steps

If you like Rapid.Space OSS/BSS and want to implement your own copy, we suggest that you first read the existing tutorials and then request Rapid.Space to implement a proof-of-concept (POC) for you.

Tutorials

https://handbook.rapid.space/rapidspace-Handbook/rapidspace-Learning.Track

Rapid.Space has already trained more than 200 students on its OSS/BSS thanks to a partnership with Telecom Paris. We have written a step-by-step tutorial that has been followed by these students. If needed, we can coach you remotely to learn, step by step, how to deploy Rapid.Space OSS/BSS.

It is also a good way to learn the architecture of Rapid.Space OSS/BSS and further understand its design principles.

POC

  • 3 months
  • cloud based or on premise
  • focus on a single "key aspect"

If you are interested in deploying a commercial system quickly, we highly recommend requesting a proof-of-concept (POC) from Rapid.Space. A POC takes about one to three months to develop.

As part of this POC, it is recommended to focus on a single "key aspect" which has a reputation for being difficult, rather than on UI or appearance. It can be a management aspect or a technical one.

This way, the POC helps mitigate the risk of the full implementation of the OSS/BSS. The POC can also be used as-is to start the initial commercialisation of services. This is what happened with the Bip & Go service. The initial implementation took a few months. Scaling up took a year. Adding hundreds of subscription contract options and further scaling to millions of users took a few years. The main point of congestion was project management related to business process specification.

In terms of cost, our current assessments show that Rapid.Space OSS/BSS is about 5 times more cost efficient than the traditional billing or operation management systems used by commercial telcos.

Rapid.Space OSS/BSS: Conclusion

  • Open Source / Free Software
  • Designed for scalability, portability and resiliency
  • Already deployed for private vRAN
  • Already deployed for massive billing
  • Already deployed for public cloud
  • Ready for public vRAN
  • First implement POC, next scale up

If you are interested in deploying Rapid.Space OSS/BSS, feel free to contact us at contact (@) rapid.space

We can provide training or implement a POC.