TFDx at CLUS25 – VIAVI and using AI for PCAP Analysis
by Eric Stewart on Jun.22, 2025, under Networking, Technology
I participated in Tech Field Day Extra at Cisco Live again in 2025. In this post, I’m reviewing a presentation from a company I had never heard of: VIAVI. As usual, I write these posts not only from having been in the room for the presentation, but also after reviewing the YouTube video to refresh the information in my brain. One of VIAVI’s solutions essentially examines flow data and packets for potential issues (performance or security), providing context and guidance toward resolving those issues. A good portion of what I remember from the presentation was about how various AI products responded when asked about a particular PCAP. A hearty thank you to Ward Cobleigh and Chris Greer – the presentation was informative and entertaining.
I’ve been involved in technology long enough to have developed a couple of … tendencies: the tendency to rely on my own eyeballs, and the tendency to be skeptical of anything that smells like a “new fad”. Artificial Intelligence (or more accurately, Machine Learning or Large Language Models) is very much the new fad – you’ve got companies doing everything they can to either shoe-horn “AI” into their products or use marketing speak to rebrand stuff they’ve been doing all along as “AI”. As such, at this time, I’m barely a dilettante when it comes to AI – I have a feel for what it does, but I’m by no means an expert, and I’ve seen enough of its failures to be skeptical every time a company adds “now with AI!” to their product description.
Throwing PCAPs at LLMs
Let’s preface this with a frame of reference: Ward had a 107-packet PCAP of a client/server exchange with one key issue: somewhere around packet 70, the client sent a request to the server, and the server did not respond for 132 seconds. Given that in this day and age even a few seconds can seem too long in a back-and-forth web session, 132 seconds is “a problem”.
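To make concrete what the LLMs were being asked to do, here’s a minimal sketch of that check in Python. The packet timestamps and summaries below are hypothetical stand-ins for what you’d pull out of a real capture with a tool like tshark – this isn’t VIAVI’s method or Ward’s actual data, just an illustration of “find the big gap”:

```python
# Hypothetical (timestamp_seconds, summary) pairs standing in for a parsed capture.
packets = [
    (0.000, "client -> server: request"),
    (0.045, "server -> client: ACK"),
    (132.045, "server -> client: response"),  # the 132-second stall
]

def find_gaps(packets, threshold=1.0):
    """Return (index, gap_seconds) for each inter-packet gap over threshold."""
    gaps = []
    for i in range(1, len(packets)):
        delta = packets[i][0] - packets[i - 1][0]
        if delta > threshold:
            gaps.append((i, delta))
    return gaps

for idx, delta in find_gaps(packets):
    print(f"gap of {delta:.1f}s before packet {idx}: {packets[idx][1]}")
```

Trivial when the data is this clean, obviously – the interesting part is that several general-purpose LLMs couldn’t manage the equivalent.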
Now, we have this PCAP, we pretty much know what’s wrong in it, and Ward’s next steps were to throw it at some LLMs.
- Claude by Anthropic: Took the PCAP raw. After some prompting, Claude didn’t answer the question, but provided a Python script that supposedly you could use to get your answer …
- Claude Sonnet 4: Said “I don’t see anything wrong, but you should use Wireshark to figure it out yourself.”
- ChatGPT: Doesn’t take PCAPs; it wanted JSON. It gave a lot of metadata, but further prompts had it saying “Everything looks normal aside from a 64ms delay after packet 4.” Ward admitted that at this point he started trying to refine his prompts to get it to actually find the error. Thing is … when the prompt was basically “Hey, what about packet 71?” ChatGPT responded with … well, watch the video. Calling it a hallucination would be generous.
- Microsoft Copilot: Again, needed JSON, but also would only accept 20 frames (possibly a licensing issue?). It did, to its credit, find the 132-second delay within the submitted 20 packets. It gave (after “yes, thank you” prompts) pages upon pages of information as to the “why”, said “look at Wireshark”, and offered ideas as to why the delay might have happened and what to look for.
- Google Gemini 2.5 Pro Preview: Lots of context/metadata, and spotted the delay right off the bat. Provided ideas as to why the delay might have occurred and where to look for diagnosis.
Ward noted a few purpose-built AIs:
- Selector Packet Copilot from selector.ai.
- PacketSafari Copilot from packetsafari.com.
It was very interesting to look at the results, if only to see that “general purpose” LLMs just aren’t built for packet analysis, for the most part. Indeed, ask any five LLMs a question and you’re likely to get five different answers (some closer than others). Why?
Training.
And that was one of my issues with the experiment: Ward gave them a known issue, and in at least one case basically had to refine his prompts to steer the model toward the answer he was looking for … and in that particular case, it STILL FAILED. It goes to show that an LLM is going to require some training, and oftentimes that training may need to be specialized to avoid introducing something to the model that might lead it to the wrong conclusion and have it offering you stock tips when you just want your packet capture analyzed. Also, you probably won’t want to throw a raw packet capture at anything – you’ll want to filter it down to the point that the submitted PCAP contains a single TCP conversation. That was actually one of the things Chris indicated: in his job he doesn’t use AI so much to analyze a PCAP as to give him a Wireshark command line that filters the pcap down to what he’s really looking for.
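The kind of filter Chris is after – one that isolates a single TCP conversation – is easy to picture. Here’s a small sketch that builds a standard Wireshark display filter string (the client/server endpoints are made up for illustration; this is my example, not something from the presentation):

```python
def conversation_filter(ip_a: str, port_a: int, ip_b: str, port_b: int) -> str:
    """Build a Wireshark display filter isolating one TCP conversation.
    ip.addr and tcp.port each match either direction, so pairing both
    endpoints narrows the capture to that single conversation."""
    return (
        f"ip.addr == {ip_a} && ip.addr == {ip_b} && "
        f"tcp.port == {port_a} && tcp.port == {port_b}"
    )

# Hypothetical client and server endpoints:
print(conversation_filter("10.0.0.5", 51234, "192.0.2.10", 443))
# ip.addr == 10.0.0.5 && ip.addr == 192.0.2.10 && tcp.port == 51234 && tcp.port == 443
```

Paste the resulting string into Wireshark’s filter bar (or tshark’s `-Y` option) and you’re looking at one conversation instead of the whole capture – which is exactly the prep work you’d want done before handing anything to an LLM.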
I think, at least in the short term, AI PCAP analysis is headed more towards “Explain this problem in this conversation” rather than “Can you find a problem with this conversation?” Too many of the general-purpose AI tools didn’t spot the problem to start with. Those that did gave a reasonable summary of the issue and provided … “useful” … suggestions as to where to look for further analysis. However, anyone with even a little experience in server management would have come to those conclusions well before the AI suggested them.
As for those “useful” suggestions? In several cases, there was the feeling of the LLM “throwing a lot of shit at the wall to see what sticks.” I haven’t really played much with ChatGPT or any of the others (indeed, I keep telling Teams and Outlook to stop asking me to try Copilot), but I get the feeling that this is what a lot of LLMs do – vomit a word salad back at you, hoping you’ll spot something that, with further prompting, gets you a more concise answer. Honestly, it’s not much different from refining your search terms on Google when you don’t quite get what you want from the initial attempts.
I can’t trust it at this point. There’s too much opportunity for false results in either direction, and I implore those of you throwing questions at LLMs in particular and looking for authoritative answers: Don’t. At the very least, question them no matter how sane the answer seems. Some would use the “Trust But Verify” phrasing, whereas I would just use “Verify from multiple other sources” (many of which, if you just went to them directly, would provide the answer more quickly and accurately).
As Chris indicated (not directly, but this was my takeaway): AI shouldn’t be used by the non-specialist. It should be used to provide the specialist additional context and insight … a specialist that is more ready to spot AI hallucination than someone with less experience.
Also mentioned: if you have a PCAP, be aware of what’s in it before you start throwing it at just any old LLM – they will store it somewhere and keep it, and there could be proprietary information in there (or information you would otherwise not want to get out into the world). Apparently PacketSafari has several options for PCAP sanitization during submission.
VIAVI’s Service
After going over these case studies, Ward showed off an example of VIAVI’s service and how it was able to determine where the problem existed in a Salesforce.com user experience. The example did rely on data collected from VIAVI’s own equipment in their data center (where they could capture packets or collect other flow data with ease). It was interesting that in this case, VIAVI’s dashboard was actually able to provide a packet capture for the specific flow that experienced the issue. And depending on where you looked at the problem from, the root cause may not have been clear.
And those are often the two challenges with capturing packets:
- Figuring out where you need to do the capture, and
- Filtering out the traffic you’re not interested in, so that you can focus on a small enough number of packets to see the issue.
Sometimes all you need is high-level context, which VIAVI’s dashboard will show; but sometimes, you want the packets.
Questions from my fellow delegates involved:
- Can VIAVI’s system understand WiFi frames? Answer: No, the captures are always from a data center, on the “wire”.
- Can VIAVI’s system take two PCAPs (one submitted from the client’s POV, the other from the DC’s POV) and reconcile them? Answer: Same as above, BUT: Ward suggested that capability might be coming in the future.
VIAVI deploys “GigaStors”, which are essentially packet collectors. An organization can, if it wishes, deploy them at multiple points in its infrastructure and collate the data, but the key here is that it is VIAVI’s collectors collecting the data for their platform.
When it came to getting packets out of the cloud, there are … “options” … but your mileage (and budget to pull that info) may vary depending on the cloud vendor in question. And some may not be packets but flow logs, and sometimes those flow logs are sampled (or even samples of samples).
VIAVI seems to provide a similar service to CatchPoint but it’s clearer with VIAVI exactly where the data is coming from – actual taps on your network dumping packets into their system. And from the point of view of a network engineer trying to troubleshoot user experience issues, sometimes it’s really nice to be able to click a button and get the actual packet data.
What wasn’t clear in the presentation is where “AI” might come into VIAVI’s solution. Perhaps there’s some logic involved when the packets get analyzed and the system flags some flow data as “problematic”, or, before you get down to the packet flow, maybe it’s some “proprietary” AI that’s providing you context (coloration, graphical distinctions) for the issue you’re looking into.
Summary
Ward did kind of … “bury the lede?” in his presentation. The information about what the different LLMs did with PCAPs was so interesting that that’s what stuck in my head after the presentation, more than what VIAVI’s product offered. Now, I can’t thank Ward and Chris enough for presenting this to us – it was fascinating and memorable – but it did eat into the time Ward could have taken to present VIAVI’s solution and deliver a clear “this is why you should buy our product” kind of presentation. That said … there’s no guarantee I would have found it interesting enough to write about if he hadn’t done it the way he did! That’s kind of what companies are hoping to get out of their presentations at Tech Field Day – get their presentations streamed, and get delegates to write about them later. Thing is … I almost didn’t write about VIAVI’s solution at all and nearly stopped at the “LLMs and PCAPs” portion. But, well, here we are. Is VIAVI’s solution the best? Heck, I don’t know – let’s just say that if you’re in the market for a network flow performance monitoring and troubleshooting product, put it on the list you’re evaluating.