This example describes how to set up a dual-mode session-aware load balancing cluster (SLBC) consisting of two FortiGate-5144C chassis, four FortiController-5903Cs (two in each chassis), and six FortiGate-5001Ds (three in each chassis) acting as workers. A dual-mode configuration provides eight redundant 40Gbps network connections. The FortiGate-5144C chassis is required because it supplies enough power for the FortiController-5903Cs and provides 40Gbps fabric backplane communication.
In this dual-mode configuration, the FortiController in chassis 1 slot 1 is configured to become the primary unit. Both of the FortiControllers in chassis 1 receive traffic and load balance it to the workers in chassis 1. In a dual-mode configuration the front panel interfaces of both FortiControllers are active. Each network has a single connection to either the FortiController in slot 1 or the FortiController in slot 2. The front panel F1 to F4 interfaces of the FortiController in slot 1 are named fctrl1/f1 to fctrl1/f4, and the front panel F1 to F4 interfaces of the FortiController in slot 2 are named fctrl2/f1 to fctrl2/f4.
The network connections to the FortiControllers in chassis 1 are duplicated with the FortiControllers in chassis 2. If one of the FortiControllers in chassis 1 fails, the FortiController in chassis 2 slot 1 becomes the primary FortiController and all traffic fails over to the FortiControllers in chassis 2. If one of the FortiControllers in chassis 2 fails, the remaining FortiController in chassis 2 keeps processing traffic received by its front panel interfaces. Traffic to and from the failed FortiController is lost.
Heartbeat, base control, base management, and session sync communication is established between the chassis using the FortiController B1 and B2 interfaces. Connect all of the B1 interfaces together using a 10 Gbps switch, and connect all of the B2 interfaces together using another 10 Gbps switch. Using the same switch for both the B1 and B2 interfaces is not recommended and requires a double VLAN tagging configuration.
The switches must be configured to support the following VLAN tags and subnets used by the traffic on the B1 and B2 interfaces:
- Heartbeat traffic uses VLAN 999.
- Base control traffic on the 10.101.11.0/255.255.255.0 subnet uses VLAN 301.
- Base management on the 10.101.10.0/255.255.255.0 subnet uses VLAN 101.
- Session sync traffic uses VLAN 1900 and 1901.
This example sets the device priority of the FortiController in chassis 1 slot 1 higher than the device priority of the other FortiControllers, to make sure that it becomes the primary FortiController for the cluster. Override is also enabled on the FortiController in chassis 1 slot 1. Enabling override makes it more likely that the unit you select will actually become the primary unit, but it can also cause the cluster to negotiate more often when selecting the primary unit.
1. Setting up the Hardware
Install two FortiGate-5144C series chassis and connect them to power. Ideally each chassis should be connected to a separate power circuit. Install FortiControllers in slots 1 and 2 of each chassis. Install the workers in slots 3, 4, and 5 of each chassis. The workers must be installed in the same slots in both chassis. Power on both chassis.
Check the chassis, FortiController, and FortiGate LEDs to verify that all components are operating normally (to check normal operation LED status, see the FortiGate-5000 series documentation).

Create redundant network connections to the FortiController front panel interfaces. In this example, a redundant connection to the Internet is made to the F1 interface of the FortiController in chassis 1 slot 1 and the F1 interface of the FortiController in chassis 2 slot 1. This becomes the fctrl1/f1 interface. As well, a redundant connection to the internal network is made to the F3 interface of the FortiController in chassis 1 slot 2 and the F3 interface of the FortiController in chassis 2 slot 2. This becomes the fctrl2/f3 interface.

Create the heartbeat links by connecting the FortiController B1 interfaces together and the FortiController B2 interfaces together. Connect the mgmt interfaces of all of the FortiControllers to the internal network or any network from which you want to manage the cluster.

Check the FortiSwitch-ATCA release notes and install the latest supported firmware on the FortiControllers and on the workers. Get FortiController firmware from the Fortinet Support site (select the FortiSwitch-ATCA product).
2. Configuring the FortiController in Chassis 1 Slot 1
This will become the primary FortiController. Connect to the GUI (using HTTPS) or CLI (using SSH) of the FortiController in chassis 1 slot 1 with the default IP address (https://192.168.1.99) or connect to the FortiController CLI through the console port (Bits per second: 9600, Data bits: 8, Parity: None, Stop bits: 1, Flow control: None).
From the Dashboard System Information widget, set the Host Name to ch1-slot1, or enter the command shown below.
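A minimal sketch of the CLI:

    config system global
        set hostname ch1-slot1
    end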
Add a password for the admin administrator account. You can either use the Administrators widget on the GUI or enter the command shown below.
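A sketch of the CLI; the password value is a placeholder:

    config admin user
        edit admin
            set password <password>
        end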
Change the FortiController mgmt interface IP address. Use the GUI Management Port widget or enter the command shown below.
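A sketch of the CLI; the management IP address is an example, substitute one for your network:

    config system interface
        edit mgmt
            set ip 172.20.120.151/24
        end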
If you need to add a default route for the management IP address, enter the command shown below.
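A sketch; the gateway address is an example:

    config route static
        edit 1
            set gateway 172.20.120.2
        end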
Set the chassis type that you are using, as shown below.
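A sketch; the chassis-type keyword for the FortiGate-5144C is assumed to be fortigate-5144 (check your FortiController release notes):

    config system global
        set chassis-type fortigate-5144
    end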
Enable FortiController session sync, as shown below.
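A sketch of the CLI:

    config load-balance setting
        set session-sync enable
    end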
Configure dual-mode HA. From the FortiController GUI System Information widget, beside HA Status, select Configure.
Set Mode to Dual Mode, set the Device Priority to 250, change the Group ID, select Enable Override, enable Chassis Redundancy, set Chassis ID to 1, move the b1 and b2 interfaces to the Selected column, and select OK.
Enter the command shown below to use the FortiController front panel F4 interface for session sync communication.
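A sketch of the CLI:

    config system ha
        set session-sync-port f4
    end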
You can also enter the complete HA configuration with a single command, as shown below.
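A sketch of the complete dual-mode HA configuration for this FortiController; the groupid value is an example, and the same Group ID must be used on all four FortiControllers:

    config system ha
        set mode dual
        set groupid 25
        set priority 250
        set override enable
        set chassis-redundancy enable
        set chassis-id 1
        set hbdev b1 b2
        set session-sync-port f4
    end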
If you have more than one cluster on the same network, each cluster should have a different Group ID. Changing the Group ID changes the cluster interface virtual MAC addresses, so if your Group ID setting causes a MAC address conflict, select a different Group ID. The default Group ID of 0 is not a good choice and should normally be changed.
You can also adjust other HA settings. For example, you could change the VLAN used for HA heartbeat traffic if it conflicts with a VLAN on your network, or adjust the Heartbeat Interval and Number of Heartbeats Lost to control how quickly the cluster determines that one of the FortiControllers has failed.
3. Configuring the FortiController in Chassis 1 Slot 2
Log into the FortiController in chassis 1 slot 2.
Enter the commands shown below to set the host name to ch1-slot2, configure the mgmt interface, and duplicate the HA configuration of the FortiController in slot 1, except: do not enable override, and set the device priority to a lower value (for example, 10). All other configuration settings are synchronized from the primary FortiController when the cluster forms.
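A sketch of the corresponding CLI; the mgmt IP address is an example, and the Group ID must match the one set on the primary FortiController:

    config system global
        set hostname ch1-slot2
    end
    config system interface
        edit mgmt
            set ip 172.20.120.152/24
        end
    config system ha
        set mode dual
        set groupid 25
        set priority 10
        set chassis-redundancy enable
        set chassis-id 1
        set hbdev b1 b2
        set session-sync-port f4
    end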
4. Configuring the FortiController in Chassis 2 Slot 1
Log into the FortiController in chassis 2 slot 1.
Enter the commands shown below to set the host name to ch2-slot1, configure the mgmt interface, and duplicate the HA configuration of the FortiController in chassis 1 slot 1, except: do not enable override, set the device priority to a lower value (for example, 10), and set the chassis ID to 2. All other configuration settings are synchronized from the primary FortiController when the cluster forms.
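A sketch of the corresponding CLI; the mgmt IP address is an example, and the Group ID must match the one set on the primary FortiController:

    config system global
        set hostname ch2-slot1
    end
    config system interface
        edit mgmt
            set ip 172.20.120.153/24
        end
    config system ha
        set mode dual
        set groupid 25
        set priority 10
        set chassis-redundancy enable
        set chassis-id 2
        set hbdev b1 b2
        set session-sync-port f4
    end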
5. Configuring the FortiController in Chassis 2 Slot 2
Log into the FortiController in chassis 2 slot 2.
Enter the commands shown below to set the host name to ch2-slot2, configure the mgmt interface, and duplicate the HA configuration of the FortiController in chassis 1 slot 1, except: do not enable override, set the device priority to a lower value (for example, 10), and set the chassis ID to 2. All other configuration settings are synchronized from the primary FortiController when the cluster forms.
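A sketch of the corresponding CLI; the mgmt IP address is an example, and the Group ID must match the one set on the primary FortiController:

    config system global
        set hostname ch2-slot2
    end
    config system interface
        edit mgmt
            set ip 172.20.120.154/24
        end
    config system ha
        set mode dual
        set groupid 25
        set priority 10
        set chassis-redundancy enable
        set chassis-id 2
        set hbdev b1 b2
        set session-sync-port f4
    end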
6. Configuring the cluster
After a short time the FortiControllers restart in HA mode and form an active-passive SLBC. To form a cluster, all of the FortiControllers must have the same HA configuration and at least one heartbeat link (the B1 and B2 interfaces) must be connected. If the FortiControllers are unable to form a cluster, check that they all have the same HA configuration and that the heartbeat interfaces are connected.
With the configuration described in the previous steps, the FortiController in chassis 1 slot 1 should become the primary FortiController, and you can log into the cluster using the management IP address that you assigned to this FortiController. The other FortiControllers become backup FortiControllers. You cannot log into or manage the backup FortiControllers until you configure the cluster External Management IP and add workers to the cluster. Once you do this, you can use the External Management IP address and a special port number to manage the backup FortiControllers, as described below. (You can also connect to any backup FortiController CLI using its console port.)
You can confirm that the cluster has been formed by viewing the FortiController HA configuration. The display should show all four of the FortiControllers in the cluster.
You can also go to Load Balance > Status to see the status of the FortiControllers (both slot icons should be green because both FortiControllers process traffic).
Go to Load Balance > Config to add the workers to the cluster by selecting Edit and moving the slots that contain workers to the Members list.
The Config page shows the slots in which the cluster expects to find workers. If the workers have not yet been configured for SLBC operation, their status will be Down. Configure the External Management IP/Netmask. Once you have connected workers to the cluster, you can use this IP address to manage and configure all of the devices in the cluster.
You can also enter the command shown below to add slots 3, 4, and 5 to the cluster.
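A sketch of the CLI:

    config load-balance setting
        config slots
            edit 3
            next
            edit 4
            next
            edit 5
            end
    end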
Make sure the FortiController fabric backplane ports are set to the correct speed. Since the workers are FortiGate-5001Ds and the cluster is using FortiGate-5144C chassis, the FortiController fabric backplane interface speed should be set to 40Gbps full duplex. To change backplane fabric channel interface speeds from the GUI, go to Switch > Fabric Channel, edit the slot-3, slot-4, and slot-5 interfaces, set the Speed to 40Gbps Full-duplex, and select OK.
From the CLI, enter the command shown below to change the speed of the slot-4 port.
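A sketch, assuming the FortiController fabric-channel port syntax; repeat the edit for slot-3 and slot-5:

    config switch fabric-channel physical-port
        edit slot-4
            set speed 40000full
        end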
You can also set the External Management IP and configure management access from the CLI.
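A sketch; the base-mgmt setting names are assumptions based on the FortiController load-balance configuration, and the IP address matches the external management IP used in the examples below:

    config load-balance setting
        set base-mgmt-external-ip 172.20.120.100
        set base-mgmt-external-mask 255.255.255.0
        set base-mgmt-allowaccess https ssh ping snmp
    end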
Enable base management traffic between FortiControllers. The command shown below sets the default base management VLAN (101); you can use the same command to change the base management VLAN.
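A sketch, assuming a base-mgmt-interfaces table in the load-balance settings:

    config load-balance setting
        config base-mgmt-interfaces
            edit b1
                set vlan-id 101
            next
            edit b2
                set vlan-id 101
            end
    end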
Enable base control traffic between FortiControllers. The command shown below sets the default base control VLAN (301); you can use the same command to change the base control VLAN.
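A sketch, assuming a base-ctrl-interfaces table in the load-balance settings:

    config load-balance setting
        config base-ctrl-interfaces
            edit b1
                set vlan-id 301
            next
            edit b2
                set vlan-id 301
            end
    end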
7. Adding the workers to the cluster
Reset each worker to factory default settings:

    execute factoryreset

If the workers are going to run FortiOS Carrier, add the FortiOS Carrier license instead; applying the license also resets the worker to factory default settings.
Give the mgmt1 or mgmt2 interface of each worker an IP address and connect these interfaces to your network. This step is optional but useful because when the workers are added to the cluster, these IP addresses are not synchronized, so you can connect to and manage each worker separately.
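A sketch from the worker CLI; the IP address is an example:

    config system interface
        edit mgmt1
            set ip 172.20.120.141/24
        end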
Optionally give each worker a different host name. The host name is also not synchronized and allows you to identify each worker.
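A sketch; the host name is an example:

    config system global
        set hostname worker-chassis-1-slot-3
    end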
Register each worker and apply licenses to each worker before adding the workers to the cluster. This includes FortiCloud activation, FortiClient and FortiToken licensing, and entering a license key if you purchased more than 10 Virtual Domains (VDOMs). You can also install any third-party certificates on the primary worker before forming the cluster. Once the cluster is formed, third-party certificates are synchronized to all of the workers.
Log into the CLI of each worker and enter the command shown below to set the worker to operate in FortiController mode. The worker restarts and joins the cluster.
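A sketch, assuming dual-forticontroller is the ELBC mode keyword for dual-mode clusters in your FortiOS release:

    config system elbc
        set mode dual-forticontroller
    end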
Set the backplane communication speed of the workers to 40Gbps to match the FortiController-5903C.
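A sketch, assuming the FortiGate-5001D fabric backplane interfaces are named fabric1 and fabric2:

    config system interface
        edit fabric1
            set speed 40000full
        next
        edit fabric2
            set speed 40000full
        end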
8. Managing the cluster
After the workers have been added to the cluster, you can use the External Management IP to manage the primary worker. This includes access to the primary worker GUI or CLI, SNMP queries to the primary worker, and using FortiManager to manage the primary worker. As well, SNMP traps and log messages are sent from the primary worker with the External Management IP as their source address, and connections to FortiGuard for updates, web filtering lookups, and so on all originate from the External Management IP.

You can use the External Management IP followed by a special port number to manage individual devices in the cluster. In fact, this is the only way to manage the backup FortiControllers. The special port number identifies the protocol and the chassis and slot number of the device you want to connect to. It begins with the standard port number for the protocol you are using, followed by two digits that identify the chassis number and slot number, according to the following formula:

special_port = service_port x 100 + (chassis_id - 1) x 20 + slot_id

where service_port is the normal port number for the management service (80 for HTTP, 443 for HTTPS, 22 for SSH, 23 for Telnet, 161 for SNMP), chassis_id is the Chassis ID part of the FortiController HA configuration (1 or 2), and slot_id is the number of the chassis slot.

Some examples:
- HTTPS to the FortiController in chassis 1 slot 1: 443 x 100 + (1 - 1) x 20 + 1 = 44301
- HTTPS to the FortiController in chassis 2 slot 1: 443 x 100 + (2 - 1) x 20 + 1 = 44321
- SSH to the FortiController in chassis 1 slot 2: 22 x 100 + (1 - 1) x 20 + 2 = 2202
- SSH to the FortiController in chassis 2 slot 2: 22 x 100 + (2 - 1) x 20 + 2 = 2222
You can also manage the primary FortiController using the IP address of its mgmt interface, set up when you first configured the primary FortiController. You can also manage the workers by connecting directly to their mgmt1 or mgmt2 interfaces if you set them up. However, the only way to manage the backup FortiControllers is by using their special port numbers (or a serial connection to the console port).
To manage a FortiController using SNMP, you need to load the FORTINET-CORE-MIB.mib file into your SNMP manager. You can get this MIB file from the Fortinet Support site, in the same location as the current FortiController firmware (select the FortiSwitch-ATCA product).
On the primary FortiController GUI go to Load Balance > Status. If the workers in chassis 1 are configured correctly, they should appear in their appropriate slots.
The primary FortiController should be the FortiController in chassis 1 slot 1. The primary FortiController status display includes a Config Master link that you can use to connect to the primary worker.
Log into a backup FortiController GUI (for example by browsing to https://172.20.120.100:44321 to log into the FortiController in chassis 2 slot 1) and go to Load Balance > Status. If the workers in chassis 2 are configured correctly they should appear in their appropriate slots.
The backup FortiController Status page shows the status of the workers in chassis 2 and does not include the Config Master link.
9. Configuring the workers
Configure the workers to process the traffic they receive from the FortiController front panel interfaces. By default, all FortiController front panel interfaces are in the worker root VDOM. You can keep them in the root VDOM or create additional VDOMs and move interfaces into them.
For example, if you connect the Internet to front panel interface 2 of the FortiController in slot 1 (fctrl1/f2 on the worker GUI and CLI) and the internal network to front panel interface 6 of the FortiController in slot 2 (fctrl2/f6), you can access the root VDOM and add a policy to allow users on the internal network to access the Internet, as sketched below.
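A minimal sketch of such a policy from the worker CLI; the address and service values are illustrative defaults:

    config firewall policy
        edit 1
            set srcintf "fctrl2/f6"
            set dstintf "fctrl1/f2"
            set srcaddr all
            set dstaddr all
            set action accept
            set schedule always
            set service ALL
            set nat enable
        end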
10. Results – View the status of the Primary FortiController
Log into the primary FortiController CLI and enter this command to view the system status of the primary FortiController. For example, you can use SSH to log into the primary FortiController CLI using the external management IP:
ssh admin@172.20.120.100 -p2201

get system status
Version: FortiController-5903C v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3912000029
BIOS version: 04000009
System Part-Number: P08442-04
Hostname: ch1-slot1
Current HA mode: dual, master
System time: Mon Sep 15 10:11:48 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
Enter this command to view the load balance status of the primary FortiController and its workers. The command output shows the workers in slots 3, 4, and 5, and status information about each one.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: slot-3
Blades:
  Working:  3 [ 3 Active  0 Standby]
  Ready:    0 [ 0 Active  0 Standby]
  Dead:     0 [ 0 Active  0 Standby]
  Total:    3 [ 3 Active  0 Standby]
Slot  3: Status:Working  Function:Active
  Link:      Base: Up  Fabric: Up
  Heartbeat: Management: Good  Data: Good
  Status Message:"Running"
Slot  4: Status:Working  Function:Active
  Link:      Base: Up  Fabric: Up
  Heartbeat: Management: Good  Data: Good
  Status Message:"Running"
Slot  5: Status:Working  Function:Active
  Link:      Base: Up  Fabric: Up
  Heartbeat: Management: Good  Data: Good
  Status Message:"Running"
Enter this command from the primary FortiController to show the HA status of the FortiControllers. The command output shows a lot of information about the cluster, including the host names and chassis and slot locations of the FortiControllers, the number of sessions each FortiController is processing (in this case, 0 for each FortiController), the number of failed workers (0 of 3 for each FortiController), the state of each FortiController's front panel interfaces, and so on. The hbdevs lines also show that the B1 interfaces are connected (status=alive) and the B2 interfaces are not (status=dead). The cluster can still operate with a single heartbeat connection, but redundant heartbeat interfaces are recommended.
diagnose system ha status
mode: dual
minimize chassis failover: 1
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.201, uptime=1517.38, chassis=1(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface= b1 best=yes
            local_interface= b2 best=no
ch2-slot1(FT513B3912000051), Slave(priority=2), ip=169.254.128.203, uptime=1490.50, chassis=2(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface= b1 last_hb_time=82192.16 status=alive
            local_interface= b2 last_hb_time=    0.00 status=dead
ch2-slot2(FT513B3913000168), Slave(priority=3), ip=169.254.128.204, uptime=1476.37, chassis=2(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface= b1 last_hb_time=82192.27 status=alive
            local_interface= b2 last_hb_time=    0.00 status=dead
ch1-slot2(FT513B3914000006), Slave(priority=1), ip=169.254.128.202, uptime=1504.58, chassis=1(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface= b1 last_hb_time=82192.16 status=alive
            local_interface= b2 last_hb_time=    0.00 status=dead
11. Results – View the status of the Chassis 1 Slot 2 FortiController
Log into the chassis 1 slot 2 FortiController CLI and enter this command to view the status of this backup FortiController. To use SSH:
ssh admin@172.20.120.100 -p2202

get system status
Version: FortiController-5903C v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3914000006
BIOS version: 04000010
System Part-Number: P08442-04
Hostname: ch1-slot2
Current HA mode: dual, backup
System time: Mon Sep 15 10:14:53 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
Enter this command to view the status of this backup FortiController and its workers.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: slot-3
Blades:
  Working:  3 [ 3 Active  0 Standby]
  Ready:    0 [ 0 Active  0 Standby]
  Dead:     0 [ 0 Active  0 Standby]
  Total:    3 [ 3 Active  0 Standby]
Slot  3: Status:Working  Function:Active
  Link:      Base: Down  Fabric: Up
  Heartbeat: Management: Good  Data: Good
  Status Message:"Running"
Slot  4: Status:Working  Function:Active
  Link:      Base: Down  Fabric: Up
  Heartbeat: Management: Good  Data: Good
  Status Message:"Running"
Slot  5: Status:Working  Function:Active
  Link:      Base: Down  Fabric: Up
  Heartbeat: Management: Good  Data: Good
  Status Message:"Running"
Enter this command from the FortiController in chassis 1 slot 2 to show the HA status of the FortiControllers. Notice that the FortiController in chassis 1 slot 2 is shown first.
diagnose system ha status
mode: dual
minimize chassis failover: 1
ch1-slot2(FT513B3914000006), Slave(priority=1), ip=169.254.128.202, uptime=1647.44, chassis=1(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface= b1 best=yes
            local_interface= b2 best=no
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.201, uptime=1660.17, chassis=1(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface= b1 last_hb_time=82305.93 status=alive
            local_interface= b2 last_hb_time=    0.00 status=dead
ch2-slot1(FT513B3912000051), Slave(priority=2), ip=169.254.128.203, uptime=1633.27, chassis=2(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface= b1 last_hb_time=82305.83 status=alive
            local_interface= b2 last_hb_time=    0.00 status=dead
ch2-slot2(FT513B3913000168), Slave(priority=3), ip=169.254.128.204, uptime=1619.12, chassis=2(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface= b1 last_hb_time=82305.93 status=alive
            local_interface= b2 last_hb_time=    0.00 status=dead
12. Results – View the status of the Chassis 2 Slot 1 FortiController
Log into the chassis 2 slot 1 FortiController CLI and enter this command to view the status of this backup FortiController. To use SSH:
ssh admin@172.20.120.100 -p2221

get system status
Version: FortiController-5903C v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3912000051
BIOS version: 04000009
System Part-Number: P08442-04
Hostname: ch2-slot1
Current HA mode: dual, backup
System time: Mon Sep 15 10:17:10 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
Enter this command to view the status of this backup FortiController and its workers.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: N/A
Blades:
  Working:  3 [ 3 Active  0 Standby]
  Ready:    0 [ 0 Active  0 Standby]
  Dead:     0 [ 0 Active  0 Standby]
  Total:    3 [ 3 Active  0 Standby]
Slot  3: Status:Working  Function:Active
  Link:      Base: Up  Fabric: Up
  Heartbeat: Management: Good  Data: Good
  Status Message:"Running"
Slot  4: Status:Working  Function:Active
  Link:      Base: Up  Fabric: Up
  Heartbeat: Management: Good  Data: Good
  Status Message:"Running"
Slot  5: Status:Working  Function:Active
  Link:      Base: Up  Fabric: Up
  Heartbeat: Management: Good  Data: Good
  Status Message:"Running"
Enter this command from the FortiController in chassis 2 slot 1 to show the HA status of the FortiControllers. Notice that the FortiController in chassis 2 slot 1 is shown first.
diagnose system ha status
mode: dual
minimize chassis failover: 1
ch2-slot1(FT513B3912000051), Slave(priority=2), ip=169.254.128.203, uptime=1785.61, chassis=2(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface= b1 best=yes
            local_interface= b2 best=no
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.201, uptime=1812.38, chassis=1(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface= b1 last_hb_time=79145.95 status=alive
            local_interface= b2 last_hb_time=    0.00 status=dead
ch2-slot2(FT513B3913000168), Slave(priority=3), ip=169.254.128.204, uptime=1771.36, chassis=2(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface= b1 last_hb_time=79145.99 status=alive
            local_interface= b2 last_hb_time=    0.00 status=dead
ch1-slot2(FT513B3914000006), Slave(priority=1), ip=169.254.128.202, uptime=1799.56, chassis=1(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface= b1 last_hb_time=79145.86 status=alive
            local_interface= b2 last_hb_time=    0.00 status=dead
13. Results – View the status of the Chassis 2 Slot 2 FortiController
Log into the chassis 2 slot 2 FortiController CLI and enter this command to view the status of this backup FortiController. To use SSH:
ssh admin@172.20.120.100 -p2222

get system status
Version: FortiController-5903C v5.0,build0024,140815
Branch Point: 0024
Serial-Number: FT513B3913000168
BIOS version: 04000010
System Part-Number: P08442-04
Hostname: ch2-slot2
Current HA mode: dual, backup
System time: Mon Sep 15 10:20:00 2014
Daylight Time Saving: Yes
Time Zone: (GMT-8:00)Pacific Time(US&Canada)
Enter this command to view the status of the backup FortiController and its workers.
get load-balance status
ELBC Master Blade: slot-3
Confsync Master Blade: N/A
Blades:
  Working:  3 [ 3 Active  0 Standby]
  Ready:    0 [ 0 Active  0 Standby]
  Dead:     0 [ 0 Active  0 Standby]
  Total:    3 [ 3 Active  0 Standby]
Slot  3: Status:Working  Function:Active
  Link:      Base: Down  Fabric: Up
  Heartbeat: Management: Good  Data: Good
  Status Message:"Running"
Slot  4: Status:Working  Function:Active
  Link:      Base: Down  Fabric: Up
  Heartbeat: Management: Good  Data: Good
  Status Message:"Running"
Slot  5: Status:Working  Function:Active
  Link:      Base: Down  Fabric: Up
  Heartbeat: Management: Good  Data: Good
  Status Message:"Running"
Enter this command from the FortiController in chassis 2 slot 2 to show the HA status of the FortiControllers. Notice that the FortiController in chassis 2 slot 2 is shown first.
diagnose system ha status
mode: dual
minimize chassis failover: 1
ch2-slot2(FT513B3913000168), Slave(priority=3), ip=169.254.128.204, uptime=1874.39, chassis=2(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface= b1 best=yes
            local_interface= b2 best=no
ch1-slot1(FT513B3912000029), Master(priority=0), ip=169.254.128.201, uptime=1915.59, chassis=1(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface= b1 last_hb_time=78273.86 status=alive
            local_interface= b2 last_hb_time=    0.00 status=dead
ch2-slot1(FT513B3912000051), Slave(priority=2), ip=169.254.128.203, uptime=1888.78, chassis=2(1)
    slot: 1
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface= b1 last_hb_time=78273.85 status=alive
            local_interface= b2 last_hb_time=    0.00 status=dead
ch1-slot2(FT513B3914000006), Slave(priority=1), ip=169.254.128.202, uptime=1902.72, chassis=1(1)
    slot: 2
    sync: conf_sync=1, elbc_sync=1, conn=3(connected)
    session: total=0, session_sync=in sync
    state: worker_failure=0/3, intf_state=(port up:)=0 force-state(0:none)
    hbdevs: local_interface= b1 last_hb_time=78273.72 status=alive
            local_interface= b2 last_hb_time=    0.00 status=dead