SciPy 2025, held in Tacoma, Washington, was an inspiring reminder of just how central Python has become to the future of science, data, and cybersecurity. Across seven days of tutorials, talks, and collaborative sprints, the conference brought together researchers, engineers, and open-source contributors. Together, they showcased the evolving power of Python in everything from machine learning to quantum computing.
With dozens of tracks and workshops spanning hard sciences, data engineering, and applied AI, this year’s conference had something for everyone, and far more than any one person could take in. While I can’t cover it all, I want to share a few standout themes that reflect where the Python ecosystem is headed, and how we in cybersecurity can benefit from the latest technologies.
The first workshop I attended at SciPy 2025 was Explainability in Machine Learning. The workshop’s talks and tutorials emphasized building tools and workflows to interpret complex models. Explainability is all about understanding the “why” behind AI decisions. In cybersecurity, that means explaining why a tool flagged a particular activity as suspicious or recommended a specific action. This transparency builds trust, produces actionable insights, and helps ensure adherence to compliance requirements and cybersecurity frameworks.
The workshop highlighted SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) as crucial for making complex AI systems more interpretable. SHAP uses game theory to show how much each individual feature contributed to a decision, while LIME approximates a model’s behavior around a specific prediction with a simpler, interpretable surrogate. These tools make it easier for cybersecurity professionals to unpack even the most sophisticated “black box” systems.
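To make that concrete, here is a minimal sketch of how SHAP might explain why a toy classifier flagged a single network event. The features, data, and “suspicious” label are invented purely for illustration; SHAP itself only needs a trained model and the rows you want explained.

```python
# A minimal sketch, assuming scikit-learn and the shap package are installed.
# The features, data, and "suspicious" label are invented purely for illustration.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Hypothetical per-event features: bytes sent, failed logins, and an off-hours flag
X = pd.DataFrame({
    "bytes_sent": rng.lognormal(mean=10, sigma=1, size=500),
    "failed_logins": rng.poisson(lam=1, size=500),
    "off_hours": rng.integers(0, 2, size=500),
})
# Toy label: flag events where several risk indicators line up
y = ((X["failed_logins"] > 3) | ((X["off_hours"] == 1) & (X["bytes_sent"] > 60_000))).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shapley values quantify how much each feature pushed this one event's prediction
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[[0]])

print(list(X.columns))
print(contributions)  # per-feature (and per-class) contributions for the first event
```

The output won’t tell you whether the model is right, but it does tell an analyst which signals drove the verdict, which is exactly the kind of evidence auditors and incident responders ask for.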
Explainability isn’t just a bonus; it’s essential for meeting regulatory standards and avoiding legal risks. Many regulatory standards mandate that organizations explain automated decisions to individuals. For example, the pharmaceutical industry requires transparency to secure FDA approvals for AI-driven models, specifically showing that those models are unbiased. By prioritizing explainable AI, compliance and security teams can enhance both functionality and accountability, ensuring that their systems are as transparent as they are effective.
Another SciPy 2025 highlight was the growing maturity of Ibis, a framework that abstracts data transformation logic across multiple backends. Instead of writing Pandas-specific code, you write your transformations once in Ibis and run them on backends such as DuckDB or BigQuery, deploying across platforms with minimal friction. A tutorial titled “Simplify and speed pipelines using DuckDB with Pandas, Polars, and Ibis” demonstrated how this abstraction helps unify disparate data workflows.
Traditional machine learning workflows relied on extracting data from a database, loading it into a dataframe, and training the model on that dataframe in memory. With Ibis, you can skip the intermediate dataframe and perform most of the analysis in the database itself, pulling data into memory only when it’s time to train.
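As a rough illustration, here is a minimal sketch of that pattern using Ibis with a local DuckDB backend. The table, column names, and thresholds are invented; the point is that the filtering and aggregation are expressed once as Ibis expressions, compiled to SQL, and executed inside the database, with only the final result pulled back into memory.

```python
# A minimal sketch, assuming the ibis-framework package with its DuckDB backend is installed.
# The table, columns, and thresholds are invented for illustration.
import pandas as pd
import ibis

# Local DuckDB backend; the same expressions could target BigQuery or another backend
con = ibis.duckdb.connect()

# Hypothetical connection log loaded into the backend
logs = pd.DataFrame({
    "src_ip": ["10.0.0.1", "10.0.0.2", "10.0.0.1", "10.0.0.3"],
    "bytes":  [120, 4_500, 98_000, 230],
    "failed": [0, 1, 1, 0],
})
events = con.create_table("events", logs)

# Transformations are expressed once as Ibis expressions and compiled to SQL
big_transfers = events.filter(events.bytes > 1_000)
summary = big_transfers.group_by("src_ip").aggregate(
    total_bytes=big_transfers.bytes.sum(),
    failures=big_transfers.failed.sum(),
)

# Execution happens inside DuckDB; only the aggregated result comes back as a DataFrame
print(summary.execute())
```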
For security analysts or researchers working across cloud and local environments, this kind of flexibility can dramatically speed up pipelines while keeping codebases clean and portable.
Technology like Ibis is worth following, as it could become the next standard for data engineering workloads. If nothing else, it is a solid improvement over standard Pandas DataFrames. This paradigm shift mitigates some previous security concerns, since it limits data in motion. However, it could introduce new risks, such as giving AI-powered systems direct database access.
Graph data also had its moment in the spotlight at SciPy 2025 with updated best practices and tutorials on NetworkX. Graphs are a data structure built from nouns and verbs: nodes (the nouns) connect to other nodes through edges (the verbs). For example, “Keegan (node) is friends (edge) with Mary (node)” describes a very small graph. Graphs excel at analyzing complex relationships and are used in a variety of industries, including cybersecurity.
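In NetworkX, that toy graph is only a few lines (a minimal sketch; the relationship attribute is just for illustration):

```python
# A minimal sketch of the toy friendship graph in NetworkX
import networkx as nx

g = nx.Graph()
g.add_edge("Keegan", "Mary", relationship="friends")  # nodes are the nouns, the edge is the verb

print(list(g.nodes))            # ['Keegan', 'Mary']
print(list(g.edges(data=True)))
```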
Whether analyzing dependency graphs, modeling social networks, or mapping interconnected systems, NetworkX continues to be an indispensable tool for graph analytics. The sessions dove deep into topics like centrality, pathfinding, and community detection.
These concepts are useful in scientific research, but also have applicability in vulnerability chaining and network segmentation in cloud security. By representing entities like devices, users, or IP addresses as nodes and their interactions as edges, NetworkX enables the visualization and analysis of network traffic, threat patterns, and vulnerabilities. These techniques help pinpoint critical nodes, detect anomalies, and uncover hidden relationships within networks, which can drastically accelerate the detection of anomalous or suspicious behaviors.
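Here is a small, hypothetical sketch of that idea: a handful of hosts and observed connections modeled as a NetworkX graph, with betweenness centrality surfacing likely choke points and a simple community-detection pass suggesting segmentation boundaries. The hosts and flows are invented for illustration.

```python
# A minimal sketch, assuming NetworkX is installed; the hosts and flows are invented.
import networkx as nx
from networkx.algorithms import community

# Nodes are hosts; edges are observed connections between them
flows = [
    ("workstation-1", "file-server"),
    ("workstation-2", "file-server"),
    ("workstation-3", "file-server"),
    ("file-server", "domain-controller"),
    ("workstation-4", "jump-box"),
    ("jump-box", "domain-controller"),
]
g = nx.Graph(flows)

# Betweenness centrality surfaces hosts that sit on many shortest paths,
# which are likely choke points or attractive pivot targets
centrality = nx.betweenness_centrality(g)
for host, score in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{host:20s} {score:.2f}")

# Community detection suggests natural segmentation boundaries
groups = community.greedy_modularity_communities(g)
print([sorted(group) for group in groups])
```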
As the themes above suggest, Large Language Models (LLMs) were a major draw. But what made the SciPy treatment unique was its practical, grounded approach to real-world deployment. Talks focused on building reliable LLM applications, covering topics like prompt engineering, retrieval-augmented generation (RAG), safety evaluation, and latency optimization.
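To give a flavor of what grounded RAG looks like, here is a deliberately minimal sketch: documents are retrieved with a TF-IDF similarity search from scikit-learn, and the retrieved text is stitched into the prompt before it goes to a model. The ask_llm function is a hypothetical placeholder for whatever LLM client you actually use, and real deployments would typically use an embedding model and a vector store instead of TF-IDF.

```python
# A deliberately minimal RAG sketch using TF-IDF retrieval from scikit-learn.
# ask_llm() is a hypothetical placeholder for whatever LLM client you actually use;
# the documents and query are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Runbook: isolate a compromised host by disabling its switch port and revoking credentials.",
    "Policy: all production model artifacts must be signed and verified before deployment.",
    "Guide: rotate API keys every 90 days and store them in the secrets manager.",
]

vectorizer = TfidfVectorizer().fit(documents)
doc_vectors = vectorizer.transform(documents)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to your LLM provider of choice."""
    raise NotImplementedError

query = "How should we respond to a compromised host?"
context = "\n".join(retrieve(query))

# Grounding the prompt in retrieved context is what makes the answer traceable
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
print(prompt)  # answer = ask_llm(prompt)
```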
At the same time, several sessions raised important concerns around LLMs in science, highlighting risks like hallucinated citations, biased outputs, and lack of traceability. These discussions reinforced the need for transparency, human-in-the-loop validation, and reproducibility when integrating LLMs into research or production pipelines.
For me personally, the highlight of the event was a talk by Mihai Maruseac on the OpenSSF Model Signing project. Open Source Model Signing (OMS) can be used to verify that a model hasn’t been tampered with. AI security is an open frontier, and best practices are still evolving. If you are using open-source models, make sure you stick with verified sources, and take extra precautions by scanning models and securing them as much as possible. If you are a machine learning engineer, you should be signing your models before deployment. The OpenSSF has an AI security SIG that meets bimonthly on Zoom. I recommend attending that group if you want deeper insights into some of the most cutting-edge advances in AI security.
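The OMS tooling itself produces and verifies cryptographic signatures over model artifacts, which is more robust than anything I can show in a few lines here. As a simplified stand-in for the underlying idea, here is a sketch of checking a downloaded model file against a digest published by a trusted source before loading it; the file name and digest are invented for illustration.

```python
# NOT the OpenSSF model-signing tooling itself: just a simplified sketch of the underlying
# idea of verifying a downloaded artifact before loading it. The real OMS workflow uses
# cryptographic signatures; the file name and digest below are invented for illustration.
import hashlib
from pathlib import Path

def sha256_digest(path: Path) -> str:
    """Compute the SHA-256 digest of a file without reading it all into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

model_path = Path("model.safetensors")  # hypothetical model artifact
expected = "0123abcd..."                # digest published by the model's maintainer (placeholder)

if sha256_digest(model_path) != expected:
    raise RuntimeError("Model artifact does not match the published digest; refuse to load it.")
```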
There’s no way to cover every talk, poster, or hallway conversation, but I encourage everyone to check out the recorded sessions on YouTube.
Whether you’re a researcher, a software engineer, a data scientist, or a security practitioner, there’s something at SciPy for you. The conference continues to be a place where code, science, and community converge. I’m already looking forward to what 2026 will bring. Keep checking GuidePoint Security for emerging technologies and trends from your trusted cybersecurity partner.
Interested in exploring the role of AI in security? Or do you have AI-driven applications and workloads that need to be secured? Either way, we can help. Contact GuidePoint Security to discuss your unique goals and needs.
Keegan Justis
Keegan Justis is a seasoned professional renowned for his expertise in cloud native services and DevSecOps practices tailored for AI-focused SaaS startups. Keegan excels in designing resilient cloud infrastructures that support the demands of AI applications while maintaining rigorous security standards. Beyond technology, Keegan enjoys reading, staying active with workouts, and cherishing time with his wife.