---
description: This section describes Memphis' architecture
---

# Architecture

### Connectivity Diagram

Memphis deployment comprises the following components:

**1.** UI - The dashboard of Memphis.
**2.** Broker - Messaging queue. The Memphis broker is a fork of [NATS.io](http://nats.io/), an existing, battle-tested messaging system, extended with Memphis-specific improvements and tunings.
**3.** MongoDB - Used only for UI state persistence (not for storing messages). It will be replaced in upcoming versions.
 
Consumers are pull-based; both the pull interval and the batch size are configurable. Each consumer consumes all the messages residing in a station. If a client requires horizontal scaling, with messages split across multiple consumers, those consumers must be created within the same consumer group, as sketched below.
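As a minimal sketch, assuming the memphis-dev Node.js SDK (the host, credentials, station, and option names here are illustrative; check the SDK reference for exact parameters), consumers that share a `consumerGroup` split a station's messages between them:

```typescript
import { memphis } from "memphis-dev";

(async () => {
    // Hypothetical host and credentials.
    const connection = await memphis.connect({
        host: "localhost",
        username: "app_user",
        connectionToken: "<token>",
    });

    // Consumers sharing the same consumerGroup split the station's
    // messages between them; a consumer outside the group would
    // receive every message instead.
    const consumer = await connection.consumer({
        stationName: "orders",
        consumerName: "worker-1",
        consumerGroup: "order-workers",
        pullIntervalMs: 1000, // pull every second
        batchSize: 10,        // up to 10 messages per pull
    });

    consumer.on("message", (message: any) => {
        console.log(message.getData().toString());
        message.ack();
    });
})();
```

Running a second instance with `consumerName: "worker-2"` and the same `consumerGroup` would share the station's traffic with the first.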
MongoDB is not involved in data traffic or standard broker behavior; it is responsible only for UI state and metadata.
### Cluster mode component diagram (For production)

Full Kubernetes-based layout.
 
### Ordering

Ordering is guaranteed only while working with a single consumer group.

![](../.gitbook/assets/ordering.jpeg)

### Mirroring
Memphis is designed to run as a distributed cluster for a highly available and scalable system. The consensus algorithm responsible for atomicity within Memphis is called Raft. Unlike Apache ZooKeeper, which is widely used by other projects such as Kafka, Raft does not require a witness or a standalone quorum node. Raft is also equivalent to Paxos in fault tolerance and performance.
To ensure data consistency and zero message loss across complete broker restarts, Memphis brokers should run on different nodes, which Memphis tries to arrange automatically. A Raft quorum requires ⌊n/2⌋ + 1 of the n cluster members, so in a Kubernetes environment three Memphis brokers are deployed: with three brokers the quorum is two, and the cluster tolerates the failure of one node. Three is therefore the minimum number of brokers that ensures at least one node failure can be survived.
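As a sketch of a production (cluster mode) install, assuming the public Memphis Helm chart and its `cluster.enabled` value (verify both the chart location and the flag against the installation guide):

```bash
helm repo add memphis https://k8s.memphis.dev/charts/
helm install memphis memphis/memphis \
  --create-namespace --namespace memphis \
  --set cluster.enabled="true"   # deploys three brokers to satisfy the Raft quorum
```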
![](../.gitbook/assets/replications.jpeg)

### Internal Protocol

Memphis forked and modified [NATS](https://nats.io) as its core queue.
The NATS streaming protocol sits atop the core NATS protocol and uses [Google's Protocol Buffers](https://developers.google.com/protocol-buffers/). Protocol buffer messages are marshaled into bytes and published as Memphis messages on a specific station.
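As an illustrative sketch of that marshaling step (the `.proto` schema, type name, and station are hypothetical, and the producer calls assume the memphis-dev Node.js SDK):

```typescript
import { memphis } from "memphis-dev";
import * as protobuf from "protobufjs";

(async () => {
    // Hypothetical schema: message Order { string id = 1; int32 qty = 2; }
    const root = await protobuf.load("order.proto");
    const Order = root.lookupType("acme.Order");

    // Marshal the protocol buffer message into bytes...
    const bytes = Order.encode(Order.create({ id: "o-1", qty: 3 })).finish();

    const connection = await memphis.connect({
        host: "localhost",
        username: "app_user",
        connectionToken: "<token>",
    });
    const producer = await connection.producer({
        stationName: "orders",
        producerName: "order-producer",
    });

    // ...and publish them as a Memphis message on the station.
    await producer.produce({ message: Buffer.from(bytes) });
})();
```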
### Deployment sequence 
 
### Requirements 
{% tabs %}
{% tab title="Kubernetes" %}
**Minimum Requirements (No HA)**

| Resource | Quantity |
| --------- | -------- |
| K8S Nodes | 1 |
| CPU | 2 CPU |
| Memory | 4GB RAM |
| Storage | 12GB PVC |


**Recommended Requirements (HA)**

| Resource | Minimum Quantity |
| --------- | ----------------- |
| K8S Nodes | 3 |
| CPU | 4 CPU |
| Memory | 8GB RAM |
| Storage | 12GB PVC per node |
{% endtab %}

{% tab title="Docker" %}
**Requirements (No HA)**

| Resource | Quantity |
| -------- | --------------------- |
| OS | Mac / Windows / Linux |
| CPU | 1 CPU |
| Memory | 4GB |
| Storage | 6GB |
{% endtab %}
{% endtabs %}

### Delivery Guarantee

* At least once
This is achieved by combining the persistence of published messages to the station with consumer-side tracking of the delivery and acknowledgement of each individual message as clients receive and process it.
* Exactly once 
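As an illustrative sketch of the at-least-once contract, again assuming the memphis-dev Node.js SDK (the `maxAckTimeMs` option and the handler below are assumptions, not taken from this page): a message that is not acknowledged within its ack window stays pending and is redelivered.

```typescript
import { memphis } from "memphis-dev";

// Hypothetical business logic.
function handleOrder(data: Buffer): void {
    // ... process the message payload ...
}

(async () => {
    const connection = await memphis.connect({
        host: "localhost",
        username: "app_user",
        connectionToken: "<token>",
    });

    const consumer = await connection.consumer({
        stationName: "orders",
        consumerName: "billing-worker",
        consumerGroup: "billing",
        maxAckTimeMs: 30000, // unacked messages are redelivered after 30s
    });

    consumer.on("message", (message: any) => {
        try {
            handleOrder(message.getData());
            message.ack(); // acknowledge only after successful processing
        } catch {
            // No ack: the message will be redelivered, which is what makes
            // delivery at-least-once rather than at-most-once.
        }
    });
})();
```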
