Changelog

Follow up on the latest improvements and updates.

new

AuraDB Virtual Dedicated Cloud

AuraDB Professional

AuraDB Free

AuraDB Business Critical

neo4j-cli has arrived!

neo4j-cli is here, and it’s great for AI agents that need to use Neo4j.
Starting out as a Labs project, neo4j-cli is the first step on the path to a unified CLI for Neo4j.
What it does
neo4j-cli brings together everything a developer (DX), a human (UX), and an agent (AX) needs to work with Neo4j from the terminal:
• Aura management: create instances, manage tenants, handle snapshots, CMEK, Graph Analytics, and Data API. Everything you used to do in the console, now scriptable and automatable.
• Query: run Cypher directly against any Neo4j database, Aura or self-managed, right from the CLI, with built-in read/write safeguards.
• Built-in vector embedding: compute embeddings inline using OpenAI, Ollama, or Hugging Face, and feed them straight into Cypher queries. No extra tooling needed.
• Credential management: securely store and switch between database and embedding provider credentials.
Why this matters for AI
This CLI is built with the agent experience firmly in mind. It ships with skill bundles: structured instructions that let AI coding assistants like Claude Code and GitHub Copilot use it natively. That means our customers’ AI agents can provision Aura instances, run graph queries, and do vector search from within their AI workflows. This is a significant capability for Neo4j as an AI-native graph platform.
So what should you do?
Try it yourself: https://neo4j.sh
Install takes 30 seconds.

new

AuraDB Virtual Dedicated Cloud

AuraDB Professional

AuraDB Business Critical

🚀 High Memory 2TB Instances now available for AuraDB on GCP

We’re excited to announce that High Memory Instances are now available for AuraDB Business Critical and AuraDB Virtual Dedicated Cloud customers running on GCP and transacting via pre-paid terms.
This update enables larger AuraDB deployments on GCP, with new configurations available on request, up to approximately 2TB RAM and 5TB storage.
We’re also expanding standard storage options. Eligible customers can now configure higher storage sizes for existing AuraDB BC & VDC instances running on GCP, within supported memory-to-storage ratios. For example:
  • 256GB instances can now support up to ~4TB storage
  • 384GB instances can now support up to ~5TB storage
  • 512GB instances can now support up to ~5TB storage
These expanded storage options are available directly through standard configuration options in Aura Console and API.
Because High Memory Instances depend on regional cloud capacity, access to those larger memory configurations is handled through a request flow. Eligible customers will see a Request Larger Instance option in Aura Console. Once submitted, our team will review your requirements, confirm regional availability, and guide you through next steps.
GCP is the first cloud provider supported for High Memory Instances, with support for additional cloud providers planned later this year.
If you’re interested in running larger workloads in AuraDB, please contact your Account Manager or request access directly from Aura Console.

new

AuraDB Virtual Dedicated Cloud

AuraDB Professional

AuraDS Professional

AuraDS Enterprise

AuraDB Business Critical

Tidy Up Your Org with Automated User Pruning 🧹

Keep your security tight and your member list tidy without the manual effort. Our new Inactive Organization Member Pruning feature lets you automate the clean-up of users who no longer require access to your Aura Organization.
The Highlights:
  • Set Your Rules: Define a period (30 to 365 days) after which inactive members are automatically removed from the Org.
  • Fair Warnings: Aura optionally sends up to three email notifications to users before they lose access, giving them plenty of time to log back in.
  • Activity Defined: Any action in the Console or via the Aura API resets the inactivity timer.
  • Built-in Safety: We’ll never remove the last Owner or Project Admin, and you can mark specific VIPs as Exempt.
  • Easy Recovery: Users automatically pruned can be re-invited to the org if they need to regain access.
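For teams that want to audit membership themselves, the rules above can be sketched as a small decision function. This is purely illustrative; the function name and parameters are assumptions for this sketch, not the Aura implementation or API.

```python
from datetime import datetime, timedelta

def should_prune(last_active: datetime, now: datetime, period_days: int,
                 exempt: bool = False, last_admin: bool = False) -> bool:
    """Illustrative pruning decision mirroring the rules above:
    exempt users and the last Owner/Project Admin are never removed;
    everyone else is pruned once inactive longer than the configured period."""
    if exempt or last_admin:  # built-in safety
        return False
    return now - last_active > timedelta(days=period_days)

now = datetime(2026, 6, 1)
print(should_prune(datetime(2026, 1, 1), now, 90))               # inactive ~151 days
print(should_prune(datetime(2026, 1, 1), now, 90, exempt=True))  # exempt VIP kept
```

Note that in the real feature, any Console or Aura API action resets the inactivity timer, so `last_active` here stands in for that last recorded action.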
Ready to declutter? Head to the Organization Security page to configure your policy. For further reading, our documentation page on inactive organization member pruning gives additional details.
(Note: This feature is unavailable for organizations using SSO as their exclusive login method. We recommend using the SSO provider's user management tools in these cases.)

new

AuraDB Virtual Dedicated Cloud

AuraDB Professional

AuraDB Business Critical

Aura Graph Analytics

Project-level Machine Learning Model Persistence in Aura Graph Analytics

We are excited to announce a major upgrade to how machine learning models are saved in Aura Graph Analytics. You can now persistently store and manage trained machine learning models in the Model Catalog at the project level!
Previously, models created within Aura Graph Analytics (AGA) were ephemeral and limited to the session in which they were created, while existing persistence in AuraDS was limited to the instance or database level. With this update, your models are stored safely in a persistent catalog and can be reused in later sessions across the project, even after your original Graph Data Science (GDS) session has ended.
Scope and Availability 🌐
Models are now securely scoped to your environment:
  • Project-wide access: Models can be accessed by GDS Sessions created within the same Aura project.
  • Provider & Region: Models are available to sessions operating in the same cloud provider and region where they were stored.
  • User Ownership: Access to a model is currently limited to the specific user who originally trained and saved it (i.e., the owner of the session that stored the model initially). (Note: Sharing models with other users via gds.model.publish is coming soon!)
How to use it 🛠️
You can start taking advantage of the project-level Model Catalog right away through:
  • Cypher: Manage your models directly using standard Cypher model catalog operations (note: access via the Cypher API requires an additional sessionName parameter).
  • GDS Python Client: Available natively for users on version 1.21 and newer.
For complete details on supported model catalog operations, check out the official documentation.

new

AuraDB Virtual Dedicated Cloud

AuraDB Professional

AuraDB Free

AuraDS Professional

AuraDS Enterprise

AuraDB Business Critical

Neo4j Fleet Manager

Aura Console User Password Requirements

Boosting Your Aura Security 🔐
To keep your graphs under lock and key, we’re rolling out a small update to our password policy and a friendly reminder about your security toolkit.
What’s Changing?
Starting today, the minimum password length for the Aura Console is increasing to 16 characters. This will apply to new users and to any users who reset their password.
Why the change?
While "Gr4ph123" was a solid start, modern security standards have evolved. Experts such as NIST now agree that password length, combined with using a unique password everywhere, is the best defense against attacks. A longer password has exponentially more entropy, making it much harder for bad actors to guess, while remaining easy for humans to remember if you use a simple passphrase (e.g., Correct-Horse-Battery-Staple).
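The entropy claim is easy to check yourself. Here is a back-of-the-envelope calculation, under the simplifying assumption that every character is drawn uniformly at random from its alphabet (real human-chosen passwords rarely achieve this):

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Bits of entropy for `length` symbols drawn uniformly from an alphabet."""
    return length * math.log2(alphabet_size)

# 8 mixed-case alphanumeric characters (62-symbol alphabet), like "Gr4ph123"
print(round(entropy_bits(62, 8)))    # ~48 bits
# 16 characters from the same alphabet
print(round(entropy_bits(62, 16)))   # ~95 bits
# A 4-word passphrase drawn from a 7776-word (Diceware-style) list
print(round(entropy_bits(7776, 4)))  # ~52 bits
```

Each extra character multiplies the search space by the alphabet size, which is why length beats complexity rules, and why a randomly chosen passphrase of common words can outdo a short "complex" password.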
Level Up with MFA 📱
A strong password is a great first layer, but why stop there? We highly recommend enabling Authenticator-based TOTP MFA (Time-based One-Time Password) as this is ultimately the best way to protect your Aura account.
How to do it:
  1. Log in to the Aura Console using email/password.
  2. Go to Account > Settings > Preferences > Security.
  3. Enable Multi-Factor Authentication (MFA).
  4. Follow the MFA setup steps.
If you are an Org admin, you can make Authenticator-based MFA mandatory for all of your org users in the Organization Security Settings. It only takes a minute, but it gives you peace of mind that lasts much longer.
Action Items for You
  • Update your password: If you're using a short password, it's time to update it. The next time you change your password, you'll be prompted to meet the new 16-character requirement.
  • Enable MFA: If you haven’t already, grab your favorite authenticator app (like Google Authenticator or Authy) and link it to your account.
Pro Tip: Using a password manager makes meeting this 16-character requirement a breeze. Let the robots do the remembering so you can focus on the querying!

new

AuraDB Virtual Dedicated Cloud

AuraDB Professional

AuraDB Free

AuraDS Professional

Dashboards

Dashboard actions now available

The Neo4j Dashboards tool now has Actions available for all Aura tiers:
  • Build interactive filtering based on table cells
  • Create checklists and let users select parameters dynamically
  • Add drilldowns to enable rich dashboarding applications
Check out the documentation to learn how to add actions into your own dashboards.

new

AuraDB Virtual Dedicated Cloud

AuraDB Professional

AuraDB Free

AuraDS Professional

AuraDS Enterprise

AuraDB Business Critical

Neo4j Aura Database April Release

The Neo4j Aura April release is now rolling out—starting with AuraDB Free and gradually extending to higher-tier instances.
This release focuses on strengthening GQL compliance, expanding GenAI capabilities with new AI token functions, and improving memory efficiency for complex queries.
🚀 New Cypher 25 Features (GQL Standards)
We are continuing our commitment to GQL (Graph Query Language) standards by introducing new predicates and statements that align Cypher with international standards:
  • IS LABELED / IS NOT LABELED: New GQL-compliant predicates to test for node labels (equivalent to the existing IS and IS NOT syntax). More details in the Cypher Manual.
  • FOR Statement: You can now use the GQL FOR statement to extract individual rows from a list (the GQL equivalent of UNWIND).
🤖 New AI Token Handling Functions
To further support GenAI and LLM integration workflows, we’ve added two new functions to help manage context windows and token limits:
  • ai.text.chunkByTokenLimit: Automatically split input strings into lists while respecting specific token limits. Link to Docs.
  • ai.text.countToken: Easily estimate the token count of a given string directly within your query. Link to Docs.
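As a mental model for ai.text.chunkByTokenLimit, here is a greedy chunker using a naive whitespace tokenizer. This is only a sketch of the idea; the real function's tokenization and splitting behavior may differ (see its documentation).

```python
def chunk_by_token_limit(text: str, limit: int) -> list[str]:
    """Greedily pack whitespace-delimited tokens into chunks of at most
    `limit` tokens each, preserving token order."""
    chunks: list[list[str]] = []
    current: list[str] = []
    for token in text.split():
        if len(current) == limit:  # current chunk is full; start a new one
            chunks.append(current)
            current = []
        current.append(token)
    if current:
        chunks.append(current)
    return [" ".join(c) for c in chunks]

print(chunk_by_token_limit("graphs connect entities through relationships", 2))
# → ['graphs connect', 'entities through', 'relationships']
```

In an LLM pipeline you would pair this with a token counter (like ai.text.countToken) so each chunk fits the model's context window before embedding or summarizing it.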
📈 Performance & Infrastructure Improvements
  • Memory Optimization: Significant reduction in memory usage for the SHORTEST statement.
  • Parallel runtime: Improved the precision of memory tracking within the Top operator.
  • Detailed version reporting in query output: EXPLAIN and PROFILE output now consistently includes the underlying Neo4j version, reported down to the point release.
  • Drivers: Updated neo4j-java-driver and netty-bom.
🛠️ Bug Fixes
  • Vector Indexing: Fixed an issue where quantized data in vector-3.0 indexes wasn’t being utilized at query time. This fix reduces memory usage for vector searches without requiring an index rebuild.
  • RBAC/PBAC: Resolved an error that occurred when running duplicate Property-Based Access Control commands containing temporal functions.
  • SHOW DATABASES: Fixed an incorrect error message when passing a parameter of the wrong type to SHOW DATABASES.
  • Stability: Fixed a rare record format bug related to iterating relationships on nodes that concurrently become dense.
  • Pathfinding: Fixed a bug where SHORTEST would occasionally omit valid paths when multiple targets were present.
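For context on the vector-indexing fix: quantization stores each float component in fewer bits so the index uses less memory at query time. A minimal int8-style scalar quantizer illustrates the idea (this is a generic sketch, not Neo4j's actual quantization scheme):

```python
def quantize(vector: list[float], lo: float = -1.0, hi: float = 1.0) -> list[int]:
    """Map each component from [lo, hi] onto an 8-bit integer in 0..255."""
    scale = 255.0 / (hi - lo)
    return [max(0, min(255, round((x - lo) * scale))) for x in vector]

def dequantize(codes: list[int], lo: float = -1.0, hi: float = 1.0) -> list[float]:
    """Approximate reconstruction; error is at most half a quantization step."""
    scale = (hi - lo) / 255.0
    return [lo + c * scale for c in codes]

print(quantize([-1.0, 0.0, 1.0]))  # [0, 128, 255]
```

Each 32-bit float shrinks to a single byte (roughly 4x less memory per dimension), at the cost of a small, bounded reconstruction error, which is why serving queries from the quantized data rather than the full-precision vectors saves memory.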
For full details of all updates and fixes in this release, please visit: Release Notes: Neo4j Aura Database – April 2026.
Great news if you use Microsoft Fabric in your organization. You can now export the results from your graph analysis back to the Lakehouse.
Feature details:
  • Export is only available for Graph Datasets created from data in Fabric.
  • The import model used for the Lakehouse-to-graph transformation governs which node and relationship types are exported.
  • Choose between updating the original source tables or creating new results tables.
  • Documentation is available here.
This feature is not available for Graph Datasets created by connecting an existing Aura database to Fabric.
What's new
We've added a 'Neo4j Aura' button in Neo4j Desktop v2.1.4 for migrating local databases and seamlessly connecting to Aura tools.
See your running Desktop instances inside the Aura Console and use cloud-only features and AI assistants. Look for the 'From Neo4j Desktop' instance label in the self-managed tab. Switch back to Desktop to manage local instances. To use AI assistants with Desktop instances, make sure you install the APOC plugin; find it in the local instance (...) menu > 'Plugins'.
Other core changes
  • Swapped to the Azul JRE, instead of JDK, to save space
  • Added support for accepting self-signed certificates for remote connections
  • Added URI input validation to various connection modals
  • Added a forgotten-password reset; find it in the local instance (...) menu > 'Security'
  • Fixed a token refresh 'API Error' issue on the instances page
  • Fixed an instance status issue in the instance overview panel
Query
  • Added support for user controlled auto-committing transactions via the :auto command
Explore
  • Fixed issue where locked Search Phrase did not update with changes to Cypher query
  • Numerous other bug fixes and enhancements
Import
  • Added support for composite keys on node IDs

new

AuraDB Virtual Dedicated Cloud

AuraDB Professional

AuraDB Free

AuraDB Business Critical

Explore

Multiple Explore Scenes

Users of Explore can now store multiple Scenes! Up to 3 Scenes can be saved on a Free Instance and 5 on a Pro Instance, while Enterprise Instances benefit from an unlimited number of Scenes.
Note that at this time, Scene data can only be persisted until an Instance is paused.
Additionally, Perspectives and Scenes are now stored online, meaning that they can be accessed from different workstations and web browsers. Limits on the number of Perspectives that can be stored have also been introduced, with Free Instances limited to only the Default Perspective, Pro Instances having up to 5 Perspectives, and no limits on Enterprise Instances. Any existing Perspectives will automatically be migrated to online storage.