What is Unified Trees?
My place of employment’s network is large. Nagios says nearly 2000 hosts, split up into a large number of buildings (“well over 100”), but realize that a good portion of those hosts also represent “stacks” (a stack of switches connected such that they are manageable as a single host), and that wireless access points are not included in that count. We are a “campus”, and while yes, I work for higher education, enterprise businesses also term their locations as campuses occasionally.
We use a three layer model:
- ACCESS: The user connects here. The plug on the wall connects back to a closet (Intermediate Distribution Frame) that has a switch or stack of switches that usually connect back to a single closet (Building Distribution Frame) that is then wired to –
- DISTRIBUTION: where a collection of buildings wires back to via fiber. While it is a building in its own right, a given distribution node will contain at least one (but preferably two) sizable chassis based routers (Cisco Catalyst 6500 or Nexus 7000) which do the routing between VLANs that connect through that Distribution Node. Connections to other nodes, the WAN (other data centers, other campuses), and the Internet go through the –
- CORE: A couple of routers providing high speed connectivity (10Gb, but soon 100Gb, if the money is there) to the distribution layer.
This is a “logical simplification” of the layout, by which I mean that our core routers actually exist in separate nodes, but are connected as described above.
Why explain all this? Well, a lot of newer networks, even those using this three layer model, would pack their Cacti install (and it would likely be a single install) in a Data Center, which for most purposes kind of resembles a Distribution Node. However, we have the discussion every so often and come to the agreement that each node can hold a server or two (or more in a couple of cases) that can provide two services:
- Provide a point of analysis where certain network equipment can be monitored via spanning sessions;
- Collect statistics via SNMP using MRTG or (as we’re headed in that direction) Cacti.
This provides us the ability to continue to collect statistics from equipment based out of a given node even if the node is somehow isolated from the core or data center.
Thing is, if someone wants to review the stats of a given device in a given building, they may not know which node the building is out of.
But that’s where Unified Trees comes in
Set up some kind of logical tree structure on all of your Cacti installs (and heck, they may look very similar to each other), fully mesh their Unified Tree connectivity (or not, if you use a version later than 0.1), and it doesn’t matter what Cacti install is responsible for a given building. Browse to one instance, look at the tree, click on the entry for the equipment in question, and suddenly you’re on the right Cacti instance.
Things you need to know if you’re going to use Unified Trees
As I develop this plugin some things are becoming clear about how the install process will be … tricky, and things an admin would need to know right off the bat before using this plugin:
- Of course the MySQL installations will need to be aware of one another, somehow (this was a big issue in 0.1, but for later versions, see the Installation notes and Version notes below for good news). I’ll be including code that allows the database address and table to be specified separately from the base URL for the Cacti install, but assuming each server has its own database, you’ll need to allow MySQL connections on each server whose tree you want to pull. 10 servers fully meshed? That could be time consuming (and is why the “Client/Server” option in later versions came about), but that’s how it worked in the first version; see the Installation and Version notes below. I will also be providing options for separate usernames and passwords for those databases. The good news is that a UT install only needs SELECT access to a remote database.
- There will be Cacti-version-specific patches that need to be installed, changing at least one core file (lib/html_tree.php). Hopefully the Cacti devs will provide a plugin hook in future Cacti versions so that this requirement goes away. If they do, I’ll happily release a version of UT that doesn’t require patching on newer versions of Cacti.
- IMPORTANT: If you are using Cacti versions that are not covered by the available patches (so far, only 0.8.8b), a Cacti package that is provided as part of a Linux distribution, or CactiEZ, I cannot provide any assistance in fixing issues you might come across. Even if a Linux distribution provides you with a package that claims to be version 0.8.8b, it’s entirely possible that they have modified the lib/html_tree.php file in the process and this will cause the patch to fail.
- If you’re using any kind of graph/tree specific permissions (usually because you’re using the built-in MySQL based authentication), it’s likely that UT will completely ignore those settings. Besides, you’ll want to have some kind of Single Sign On method or at least have your Cacti installs validate from the same authentication method/database, as …
- When a user clicks on a tree entry that exists on another server, be aware that (without a SSO scheme that tracks authenticated state separately from the servers) it’s very likely that the user will have to authenticate again.
- It’s coded with the end leaves being “host” based. From what I’ve noticed, it looks like you could put a single graph on the tree. I do not know what UT would do when it finds such an entry. I personally have no interest in coding that option in, sorry. I will however look at patches that address the issue.
- Each host should have a different description, especially if the tree structure on one Cacti install has an identical “path” to another. In other words, if it’s “Campus – Building – Host: Blah”, you won’t be able to see entries from other servers that are also “Campus – Building – Host: Blah”.
Installation
- Download the plugin, and untar/decompress (“tar xzf”) it into your Cacti plugins directory.
- Use the “Plugin Management” settings to “install” and “activate” the plugin. No, you’re not done yet.
- If you’re not using the “admin” user, or users templated from “admin”, you may need to edit your user settings to grant the Unified Trees realm.
- Check the settings for Unified Trees; you’ll find them under “Settings”, on the “Visual” tab. You could select “Use Unified Trees” now, but you might want to wait a bit; save the settings once configured as desired.
- On the “Console” tab, look under “Utilities” for the “Unified Trees – Sources”. Go in and add your source databases; be aware that the various Cacti servers will need users configured for each server attempting to access their databases (if using the “Client/Server” model, a “Server” needs access to the databases of all “Clients” and any other “Server”, but a “Client” will need access only to a “Server” database; and yes, you can technically have more than one “Server” for redundancy – but not load balancing – purposes). The only GRANT privilege a given Cacti server will need is SELECT, so
GRANT SELECT ON cacti.* TO 'othercacti'@'otherip' IDENTIFIED BY 'password';
(substituting the appropriate db username, Cacti server address, and password) should suffice. Don’t forget about iptables/local firewall rules. No, you’re not done yet.
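If iptables is what’s in the way, a rule along these lines opens MySQL to the other Cacti server. This is a sketch of a rules-file fragment; the address is a placeholder, and the file location (here assumed to be the RHEL-style /etc/sysconfig/iptables) varies by distribution.

```
# /etc/sysconfig/iptables fragment (assumed RHEL-style layout);
# 192.0.2.10 is a placeholder for the other Cacti server's address.
-A INPUT -p tcp -s 192.0.2.10 --dport 3306 -j ACCEPT
```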
- Be aware that after adding a source, the plugin does a connectivity test; if that fails, it will automatically disable the database (but it will leave your settings as configured).
- You will have to patch a core file, and the patches are version specific. For the initial release, only Cacti 0.8.8b is supported. For Linux users, this should only require changing directory to the root of your Cacti install as a user capable of writing files to your Cacti directory (i.e., root), and running:
patch -b -p0 < plugins/unifiedtrees/patch-0.8.8b/html_tree.diff
The patch has been designed to be of minimal impact; you can apply the patch and uninstall the plugin, and Cacti should run normally. In fact, if the “unifiedtrees” plugin is not active, or “Use Unified Trees” is not checked in the settings, you’ll use the original tree code.
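If you want to see what this workflow does before touching a real install, here’s a self-contained sketch in a scratch directory. The file contents and diff are stand-ins, not the real Cacti code or the real UT patch; the point is that “patch -b” keeps a .orig backup of lib/html_tree.php, and a dry run lets you confirm the patch applies cleanly first.

```shell
# Scratch-directory demo of the 'patch -b -p0' workflow; everything here
# is a stand-in, not the real Cacti code or the real UT diff.
set -e
dir=$(mktemp -d)
cd "$dir"
mkdir -p lib
printf 'original tree code\n' > lib/html_tree.php
printf 'unified tree code\n' > lib/html_tree.php.new
# Build a unified diff laid out for -p0 (paths relative to the Cacti root):
diff -u lib/html_tree.php lib/html_tree.php.new > html_tree.diff || true
rm lib/html_tree.php.new

patch --dry-run -p0 < html_tree.diff   # confirm it applies cleanly first
patch -b -p0 < html_tree.diff          # apply; -b keeps lib/html_tree.php.orig
```

After this runs, lib/html_tree.php holds the patched content and lib/html_tree.php.orig holds the untouched original, which is the file the upgrade instructions below rename back into place.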
Upgrading from 0.1 or 0.5:
If you followed the instructions above appropriately while installing Unified Trees, you should just be able to overwrite all the existing files, and:
- From your Cacti base directory: mv lib/html_tree.php.orig lib/html_tree.php
- Then: patch -b -p0 < plugins/unifiedtrees/patch-0.8.8b/html_tree.diff
Things That Can Go Wrong
- Screwing up database permissions or configurations on or for the other servers.
- Not having a remote database configured on either all other systems (if using “Always”) or all of the servers (if using “Server/Client” and you have multiple servers).
- Having the plugin installed on only some of your servers (this will result in vastly different trees for different installs).
- Forgetting to check “Use Unified Trees” in “Settings”.
- Having an unsupported version or source for Cacti. At this time, only Cacti version 0.8.8b, Linux/Unix tar.gz downloaded from Cacti’s site is supported:
- The Windows ZIP version is unsupported (the files are in “dos” format, and patch will not work properly).
- A user submitted an issue to the GitHub page for Unified Trees indicating that he was using the EPEL repository’s version of Cacti. Upon review, the lib/html_tree.php provided by that package (advertised as version 0.8.8b) was significantly different (probably “forward patched” from the “non-stable” version of Cacti). As such, it is unsupported at this time.
And a whole lot of other stuff.
- The plugin was developed around the idea that switches are polled by the server in the node the switch eventually connects through. Thing is, determining this isn’t foolproof, and errors creep in once in a while. So sometimes it’s important to move a device to a different server. In order to avoid losing the data from the original host:
- You can mark a host as disabled, and the build tree function should be able to detect that it is disabled and add a “(D)” to the host name, separating it from the “active” machine and listing them both in the tree.
- There is still a possibility of collisions between Cacti hosts with the same target if it is disabled on both or active on both.
- Best option? Change your tree location for disabled hosts.
Version notes
- 0.7 (unreleased outside of GitHub) introduced the idea of the “Other” tree. If you have a tree in your list of trees that you need displayed after all of the others, put the tree name in the settings box and it will be shown last.
- As I was deploying my own collection of Cacti servers, it became apparent that hourly rebuilds of the server provided tree might not be frequent enough, so 0.71 introduced the options for every 15 minutes and every 30 minutes.
- All methods in previous versions suffered from a potential “xID collision bug” when building a tree, which would result in some weirdness when browsing between trees. xID code was therefore rewritten to provide a whole new (hopefully) unique xID at time of tree rendering. The column will no longer exist in the database.
- This does mean that lib/html_tree.php will need to be repatched. See the upgrade instructions.
- Another “issue” that could rear its ugly head: a tree might render differently depending on the order the databases were read in, especially with multiple “tree roots” from different servers. Code was included to attempt to avoid that issue and provide a browsing experience that is as similar as possible across installs, even when using “Always” or multiple servers.
- This version introduced the “Client/Server” concept.
- A minimum of one install would be configured as a server, which would pull tree information at a set interval from all Cacti installs and build a Unified Tree, storing the resulting tree in a memory resident table.
- All other Cacti installs would be configured as a “client” – needing only access to the database on the Server, they would pull the Unified Tree from the memory resident table and display it.
- This, especially for a large number of Cacti installs, would be easier to set up: Only a “Server” instance would need access to all the other databases, whereas a “Client” would only need access to the “Server” instances.
- The only downside is that if you configure your tree building interval to an inconveniently long period of time, new equipment may take time to show up on the Unified Tree (even if you’re connected to the Cacti install that has the item in its local tree!).
- By contrast, under the original (“Always”) model, each individual UT install needed access to all other Cacti databases to build the Unified Tree; each UT install was considered independent of every other. It also built the tree live, so there was little to no chance (provided hosts were added to their home server’s tree) of missing information.
Found A Bug? Have a Complaint?
The project is currently on GitHub and you can post your issue there. Or find the thread I posted on the Cacti Forums and I should become aware of your issue. You could even tack on a comment here. Just pick one – it doesn’t need to be posted in all three places …