#21: ZeroEyes: The AI Weapon Detection System Protecting Public Safety - Sam Alaimo
Back in 2018, in the wake of the Parkland high school mass shooting, a group of former Navy SEALs banded together and launched a company called ZeroEyes. Their mission is to prevent and mitigate these types of violent events, and the means by which they're going about achieving that mission is technology. Here we're talking about artificial intelligence, specifically object detection algorithms. So they went and trained algorithms designed to detect the presence of guns, and they run those algorithms on CCTV camera feeds to flag specific instances of a person with one of those weapons. So my guest today is Tim Sulzer, who is one of ZeroEyes' co-founders and the company's CTO. And we talked not just about how the models work, how they were made to work, and how they were validated, but about the impacts they're seeing on the ground, the ways in which this level of automated security is changing operations, and where it all could go.
So, if you are new to the pod, I
am your host, Ian Krietzberg. If you are not new, you
knew that already, and thanks for being here. This is The
Deep View Conversations. Tim,
thanks so much for joining me today. I appreciate you having me on. Yeah. So,
you know, we've connected before. It's been a
couple of years, which is kind of crazy. But ZeroEyes has been around,
you know, even before that. You guys got your start in 2018. Let's
just start there, and then there's plenty of other stuff to dive into. But let's begin with that kind of foundational story.
So we got started in 2018 shortly after the shooting in
Parkland, Florida. And that shooting was particularly
terrible because the shooter was in a stairwell underneath
a security camera for, I think, three or five minutes before the
first shot was fired. So we looked at that
and we essentially had this idea of, you know, if
somebody was watching that camera, they would have been able to stop that shooting.
And why aren't we watching cameras? It's basically because there's not enough attention span, and not enough person-hours, to be able to have eyes on every camera everywhere.
And at the same time, AI was
progressing, object detection was becoming a reality. I'd
worked with some computer vision in a previous startup, and
it was a natural progression to be able to say, you know, if we
can detect any number of objects, faces, dogs,
cats, why can't we detect guns on security cameras and use
that as a means to provide proactive
situational awareness during a shooting? In the early days of ZeroEyes, we got started basically just asking: can
we build a model to detect guns? That was the first MVP.
And we started off by collecting images from Google Images, web-scraped images, open image datasets, and basically trying to build a model that would detect guns. And we tested on videos, like clips from The Matrix, and, you know, random images that we scraped from the web. We found out really quickly that those
types of models that are trained off web images
are not easily transferable or generalizable to
security cameras. And so we deployed our first model on a security camera
and the performance was terrible. The images didn't
match up to what the AI was trained on. So the
next step from there was we went out and bought some cheap
security cameras on Amazon and hung them up in our
CEO's backyard, which is where our office was at the time. We were working out
of his basement. And we
found out really quickly that generating our own data in
a realistic environment using a real camera, a
real sensor, was the trick that we needed. And
so we invested most of our early time in the company into
just building a quality model and building an organic data set.
That's the perfect jumping off point, right? I mean, we
hear about this all the time, kind of no matter the application, if you're talking about,
you know, studying whales off the Pacific coast, or in
your case, identifying weapons, it all comes down to data,
the quantity of the data and the quality of the data. So tell me more about that. How much data did you have to collect? And how did you think about coming up with different ways of varying the types of data you were collecting, to make sure you were getting different angles of, I guess, as great a variety of weapons as you could conceive of? Yeah,
Being a co-founder with four former Navy SEALs, we
never had a shortage of weapons to use as examples. But
exactly that, garbage in equals garbage out.
And we view our data set as probably the most important thing in the
company right now. In the early days, we never had enough data.
That was 100% of our problem. And
we spent a lot of time traveling to different environments, using
security cameras to collect data with different backgrounds and different lighting conditions. The challenge we identified early on was that security cameras are mounted in all different types of environments. Security
camera quality, video quality from cameras varies greatly from
camera to camera and manufacturer to manufacturer. So
it was really important for us to have representative examples of
all of the types of environments that we really wanted to detect guns in.
It started in Mike's backyard and progressed to local schools.
We spent every weekend for months in
the early days traveling to different schools and using their
camera systems to record weapon data. And
then the next step from there was that we really wanted schools to allow us to record data when there were actual people in the camera views. But
schools were not very willing to have us walk around during
the daytime with guns, with active students in the hallways.
So we then came up with the idea, well, if we can't
collect this data in real time, how can we use the customer
backgrounds to build in some generalization
into the model? And our solution was to build a green
screen AI lab, essentially. So we
built out a 5,000 square foot AI lab with
full green screen walls. And we hung about
100 security cameras throughout the warehouse.
So we were able to walk around with ZeroEyes employees with guns
and then overlay our customer backgrounds behind them to
give that context in the scene that we weren't able
to record in real life. And that's where
we've been for the last few years. But there are a lot of trends around synthetic data generation that will probably obsolete that approach at some point. We've invested a lot as a company in having a really high quality data set that represents as many scenarios as possible.
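To make the green-screen approach concrete, here is a minimal sketch of how a frame captured in a chroma-key lab could be composited over a still from a customer's camera. The file names and HSV threshold values are illustrative assumptions, not ZeroEyes' actual pipeline.

```python
import cv2

def composite_over_background(lab_frame_path, background_path):
    """Chroma-key a green-screen lab frame over a customer-style background image."""
    frame = cv2.imread(lab_frame_path)        # frame recorded against the green screen
    background = cv2.imread(background_path)  # still from a customer camera view
    background = cv2.resize(background, (frame.shape[1], frame.shape[0]))

    # Mask the green screen in HSV space (threshold values are illustrative).
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    green_mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))
    subject_mask = cv2.bitwise_not(green_mask)

    # Keep the subject from the lab frame and fill the rest with the background.
    subject = cv2.bitwise_and(frame, frame, mask=subject_mask)
    backdrop = cv2.bitwise_and(background, background, mask=green_mask)
    return cv2.add(subject, backdrop)

if __name__ == "__main__":
    training_example = composite_over_background("lab_frame.png", "customer_hallway.png")
    cv2.imwrite("training_example.png", training_example)
```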
So beyond that, you talked a little bit about the algorithms, and I just want to drill a little deeper into that. You said back in 2018, we were seeing advancements in object detection, which is basically the core thing that enables your technology. That's also one of the major algorithms in self-driving cars. So talk to me about, beyond the dataset, building the system itself, the algorithms. How do you validate that? What was the process like? And, you know, you're building from scratch, you're not piggybacking off of other systems, it sounds like. So I imagine if the data collection process was intensive, so was the algorithm construction process.
Yeah, there are a few different steps to it, because when you think about the entire video pipeline, the processing of the data, there are many different steps: simply decoding the video, then passing it through an inference engine, and then object tracking. So it's changed quite a bit over the years. But in the early days, we started with probably the best object detection technology at the time, which was Fast R-CNN or Faster R-CNN models. We were just starting to see YOLO models be released, and I think we're on maybe the eighth or tenth iteration of YOLO models at this point. And it's kind of diverging quite a bit. But in
the early days, it really affected our hardware processing. I
mean, we were trying to solve this problem of
being able to process real-time video at
the customer's site. Because customers are sensitive to
their security camera data being sent
somewhere else. So we identified really
quickly we have to do this with GPUs. We have to do it with GPUs on
premise. And so the model selection came down to what
model gives us the best balance of accuracy and compute
efficiency. For us, Faster R-CNN models were the highest performing at that time. They
were definitely more computationally heavy, which
meant that we couldn't load as many cameras per GPU as we wanted to, which
affected our economics. But over time, the
great thing about research and academia is that they're constantly putting
out new stuff. And for the most part, it's open source. So
we're able to use open-source, off-the-shelf models, and that has progressed to the point where I think we're using Ultralytics models today, YOLOv5 or YOLOv8. Those advancements in model architectures have brought with them increases in accuracy. As we're increasing the quality of our data set, the models and the algorithms themselves are getting better, and the speed is getting better. So we're able to run models on more cameras, at higher resolutions and higher frame rates, and all of those things turn into better detection performance for us. And
we expect that trend to continue in the future. Obviously,
we're still talking about object detection models today, which is kind
of like single frame analysis, but with
the potential for large language models and
vision transformers and things like that, we see a lot
of advancement in the future, not just on the detection side, but also on
the context side. What can we communicate about the
image itself to our customers to provide the best situational intelligence?
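As a rough illustration of the pipeline described here, decoding an RTSP stream, running a detector frame by frame, and forwarding confident detections for human review, below is a minimal sketch using OpenCV and an off-the-shelf Ultralytics YOLO model. The stream URL, confidence threshold, and the queue_for_human_review function are placeholders for illustration, not ZeroEyes' implementation, and a production model would be trained on gun-specific data rather than the generic COCO weights used here.

```python
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")   # generic off-the-shelf detector, used here only as a stand-in
CONFIDENCE_THRESHOLD = 0.5   # illustrative threshold, not a real operating point

def queue_for_human_review(frame, box, confidence):
    """Placeholder: a real system would push the flagged frame to a human-in-the-loop queue."""
    print(f"Detection at {box} with confidence {confidence:.2f} sent for review")

def monitor_stream(rtsp_url):
    capture = cv2.VideoCapture(rtsp_url)        # real-time streaming protocol feed
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        results = model(frame, verbose=False)   # single-frame object detection
        for result in results:
            for box in result.boxes:
                confidence = float(box.conf[0])
                if confidence >= CONFIDENCE_THRESHOLD:
                    queue_for_human_review(frame, box.xyxy[0].tolist(), confidence)
    capture.release()

if __name__ == "__main__":
    monitor_stream("rtsp://example.local/stream1")
```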
Hmm. Yeah. I'm glad you brought that up, because I was going to ask you about that. That big area of advancement that we've seen over the past couple of years is all kind of centered around large language models, the transformer architecture, the vision models that you mentioned. So is that something that you're already exploring?
Yeah, it's something that we see as a huge opportunity because today
we're providing this situational awareness of gun
or no gun. There's either a gun in the image or there's not. And
then we have a human in the loop that allows us to maximize
our detection performance while eliminating false positives. So
customers will never receive a false positive. But
the potential with large language models is now you can extract a little
bit more context from the scene. And you can also provide a
little bit more dynamic inputs, looking for more dynamic scenes or different objects in different configurations that wouldn't be realistic with a single model.
We'll stick on that side for a second. You mentioned something else that I want to get into in
a minute, but that'll keep. On
the language model side, the idea of greater context on the images that you're talking
about, right? That's really interesting, because like you said, right
now, gun or no gun, that's kind of it.
Other context, I mean, in a security situation
like that, any added piece of information is
probably going to be very nice to have. But
I wonder, as you start experimenting with
deploying or incorporating additional
models of different types, I
wonder how that changes your validation process. Because
the gun or no gun thing is a little more of a straightforward thing,
it's a different type of model. Language models have
reliability issues and
I wonder how much that might be a challenge, or if there's a way, maybe through specific data sets, small language models, or other techniques and safeguards, to mitigate it.
Well, I think there are two parts to that. One is, just like all good security comes in layers, I think the same thing is going to be applicable for AI. In our case, I think there will be multiple layers of AI models in the future that process all of the detections that we generate. But it also kind of highlights the value of that human in the loop. If
we were to send a detection directly through to
the end customer and provide some layer of analysis from a
large language model, it's very possible that that's incorrect and
that it could confuse or cause some sort of
issue in the response, the critical incident response. And
today we have a human in the loop that essentially does the same thing, but
they're also capable of performing other actions. So during
a critical scenario, when we see a gun and click dispatch, that's
also initiating a call to 911 dispatch
that's closest to the camera location. And someone on
our team is also getting on the phone with points of contact at the
customer to be able to verbally communicate these things. I
don't see the value of having a human in the loop ever being fully
automated or deprecated because
the value during a critical scenario is that we're able to communicate directly to that POC and not rely solely on automated alerts.
And so what you were just talking about there, that's the point that I wanted to get to and make sure we drill down on, because we've been kind of talking around it. For anyone who's not familiar with what you guys do: it's object detection designed to scan for weaponry in CCTV footage, and that's all linked up with warning systems. You have teams of people that review flags from the system. But I would love it if you could walk me through, if I was a point of contact, if I was one of your customers and we put in however many dozens of cameras we had, and something happens, your camera picks up a flag, can you just walk me through what that process looks like, from the moment the model says we might have something here, onward?
I'll take a step back and just talk about it from the entire value chain, because
you're absolutely right. We didn't actually cover that in the beginning. We're
connecting to real-time security cameras, all of the customer's existing security
infrastructure, and we're pulling an RTSP feed, a real-time streaming protocol feed, from that camera and running an AI on the video feed, frame by frame, that's looking for the presence of a gun. At the point at which our AI says it's pretty confident that this object is a gun,
it's going to send that detection to a human in the loop. And the
human in the loop is in our ZOC, our ZeroEyes Operations Center. We have one located just outside of Philadelphia here,
and then another in Honolulu, Hawaii. So we're
able to make use of those time zone differences. The
operators in the ZOC are performing an analysis to
verify whether or not the detection has a real gun in it.
So they see a real gun and they click dispatch. That
dispatch button initiates all of our third-party alerting
methods. So we have a dashboard and
a mobile app ourselves, but we also integrate with
local 911. We integrate with other third-party services
in order to get the detection information to
the customer in the best way that they can utilize it.
So from dispatch, our
ZOC operators will get connected to local 911. They'll
communicate and verify that local 911 has access
to the alert image, and they're able to generate an
incident based on that. But at the same time, we're also calling
points of contact at the customer site. And what we're trying
to communicate is basically what we're visualizing
on our side. So the
ZOC has a specific script that they stick to, but it's essentially something like: we have a ZeroEyes weapon detection alert of what appears to be a person brandishing a rifle in this setting. That
is really the initiation for an incident
on the customer side. Customers have all different standard operating procedures
of how they want to be notified and what they do following a notification. But
I do foresee more automation in the future around that, where
today we're basically handing off situational awareness to
the customer and allowing them to respond. But
I see a lot of benefit in ZeroEyes being involved
in that response in some way, whether it's something as
simple as providing expertise in
how a school or a commercial office building should respond
to that threat, all the way through to initiating other
forms of response, like dispatching alerts
into access control systems, providing
one-click access control capabilities, so
that customers can actually take action from the ZeroEyes alert.
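To summarize the flow described above in a compact form, here is a schematic sketch of the dispatch step, with the key property that nothing reaches the customer or 911 unless a human operator verifies the detection first. The class, fields, and function names are hypothetical and purely for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Detection:
    camera_id: str        # which customer camera produced the flagged frame
    site_location: str    # human-readable location, e.g. "second floor, hallway 7"
    image_path: str       # the frame shown to the ZOC operator for verification
    timestamp: datetime

def notify_customer_contacts(detection: Detection) -> None:
    print(f"Alert to customer POC: weapon visible at {detection.site_location}")

def notify_local_911(detection: Detection) -> None:
    print(f"911 dispatch notified for camera {detection.camera_id}")

def dispatch(detection: Detection, operator_verified: bool) -> None:
    """Hypothetical dispatch step: only a human-verified detection triggers alerts."""
    if not operator_verified:
        return  # false positives stop at the human in the loop and never reach the customer
    notify_customer_contacts(detection)  # dashboard, mobile app, phone call to the POC
    notify_local_911(detection)          # routed to the PSAP closest to the camera location

if __name__ == "__main__":
    detection = Detection("22B", "second floor, hallway 7", "alert_frame.jpg",
                          datetime.now(timezone.utc))
    dispatch(detection, operator_verified=True)
```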
Kind of in line with what you were just mentioning about being more involved in the action, I wonder how far something like this goes, thinking about where you are right now and where you might want to go as you keep growing, since you're analyzing real-time footage, right? That's kind of the whole fundamental crux of what you're dealing with. If there was an
attack, you know, a person brandishing the rifle, or maybe the person's in
a hallway, they're locked in a room, they're moving. Would
you be able to track the movement of these
kinds of assailants? And in an action stage,
beyond just saying, there's a person here, do something about it,
could you, or do you, physically speak to local police officers on the phone and say, okay, this is exactly where the person is, they're in this hallway, they are doing this right now, they are moving, they just turned left, kind of like an overwatch thing? Is that a function at all of what you do?
So yes and no. We don't have visibility into the customer's camera systems to the extent that we could actually provide that video or watch a shooter walk from camera to camera, and we do that for privacy reasons. Basically,
our AI is the only piece that has access to the live video. But
that being said, you're highlighting a really key point of our value. And
that is, even after the first shot is fired and we know there's a
gunman on site, we don't know where
he is at any specific point, or the first responders
don't know. And so every time that shooter walks in
front of a new camera that's running ZeroEyes, and we're able to get a
new detection, we are dispatching that and updating law
enforcement. We do have some local police
departments that use our mobile app and they get the alerts directly to
the mobile app for each individual officer. But
the general feedback we've gotten is that they would prefer to get that
information over the radio from dispatch. And so
our biggest point of contact is that 911 dispatch center. And
our integration with a company called RapidSOS gives
us the ability to automatically be connected to the
closest 911 dispatch center to where our
customer is located based on their camera location. So that
puts us in direct phone contact with 911 PSAPs,
public safety answering points, and those PSAPs are the ones that dispatch the responding officers.
And so because of that integration, any officers on
the ground, as well as points of contact at
whatever building or location you're
dealing with, they know the flag was picked
up by, you know, camera 22B in hallway seven on the second floor, or whatever it is. So they have a deeper level of situational awareness beyond just knowing there's a threat somewhere in the building.
Definitely. You also highlight a really important point,
which is a unified mapping interface. Because
if our customer is looking at a map, and we're looking at a different map,
and police responding are looking at a third different map, it
becomes really confusing trying to communicate about landmarks and
where things are located. So a unified mapping interface really matters. We partner with a company called CRG, Critical Response Group, and they produce really high quality interior maps. When
our customer has a CRG map, they're able to provide that to us, and
we are able to overlay that on our dispatch map so that both
the customer and our ZOC operations team are looking at the same map. And wherever possible, we also include the local 911 PSAPs in that when we send our alerts, so everyone is working from the same picture.
Now, you mentioned, for the ZOC, for the operators that review these flags, that there are no false positives, I think is what you said. How are these operators trained? And I guess, what's their protocol in terms of validating that whatever has been flagged is real or is not a concern? And, taking that a step further, if you get a flag that turns out not to be a gun, but maybe it's something a little weird, is that still communicated to the customer?
I would say the operators back in the ZOC have really difficult jobs.
They see guns on a daily basis. A lot of times it's
toy guns or ROTC rifles, but they're constantly required
to make split-second decisions, which
is why we focused on hiring veterans and former
law enforcement into the ZOC, because those individuals
back there both have the training and understanding of what a gun looks like and how to respond to a critical scenario. But also, as part of their former careers in the military and law enforcement, a lot of times they were doing very similar jobs. They either had watch posts or
they were performing some sort of surveillance where they had to
do basically the same thing and
be able to, during a critical scenario, remain
calm and calmly communicate critical pieces of information while people are actively under threat. So we've
had a lot of success hiring former law enforcement,
former military personnel into the ZOC. And like
I said, they have a really difficult job. On a daily basis,
they have to understand unique standard operating
procedures from different customers and be able to communicate to the
customer in the best way that the customer needs that information. And
how that translates into real life, I mean, we've made dozens of arrests at this point of people that had guns in areas that they shouldn't have. But
we also communicate with customers on a daily basis about non-lethal
gun threats. And like I said, every customer has their own SOPs. For
instance, some customers want us to disregard any
toy gun detections. Other customers still want to know about them.
And that extends to a wide array of
scenarios that include all different types of guns
being presented from law enforcement or known trainings,
things like that. So as much as possible, we
try to communicate with our customers so that we're aware of any
reasons that there should be a gun on premise, but generally our customers want
to know regardless of what type of gun it is or what the scenario is.
And that's an awesome point of communication for the ZOC.
So even if we see something that we believe is non-threatening, it's easy enough just to call up the customer and have that conversation and let them decide how to handle it.
Yeah, I'd want to know. I'd want to know. You
mentioned that your work, I
guess, has led to a bunch of arrests of
people who had weapons in places that they shouldn't have had them. I'm
wondering what else you can tell me about the
results that you've seen. Like, we've talked about how you launched in 2018, and you've been in operation, you know, and growing into new places. What kind of situations have you come across, and what has the impact been?
Yeah. So since 2018, we've expanded to, I think we're up to, 47 states. We're spread out throughout the entire country in K-12 and public transit, and on the commercial side, we're in big box retail and logistics centers and things like that. We
haven't had that stereotypical detection
and arrest of somebody that appeared to be, you know, entering a school to
commit a mass shooting. But, you know, in the early days, maybe we would see a gun once a month. Now we're absolutely seeing guns, dozens of guns, on a daily basis in areas that
I never thought we would see guns. And so the
performance of the model has proven itself as
we scale. We're generating detections all the time.
And in the scenarios that we've run with local law enforcement and
law enforcement at customer sites, we've been able to identify that the
reduction in response time is considerable.
So comparing response times: without ZeroEyes alerts, first responders are basically showing up at a school and not knowing where the
shooter is located. So they enter that school, it
could take them five to 15 minutes to clear the school
and locate the actual shooter. And there's been plenty of real life examples,
like Uvalde, where there's been serious
challenges around that. So in
the testing that we've done, we've been able to considerably reduce response
time and direct first responders exactly where in
the building they should be located. It's
something as simple as if first responders show up to the wrong side
of the building and they enter the wrong door, that could mean 15 minutes
in lost response time. Wherever possible, we're really focused on reducing that response time and getting first responders to the threat faster.
You mentioned you're seeing dozens of guns every
day in places that you wouldn't expect. Are
these, what are those kinds of situations? Are people just kind of carrying, they're walking around and they're carrying? Is there intent to commit violence? Do those always lead to reactions, not just in terms of law enforcement, but in terms of, you know, the customer that you're securing? Are there responses to that, or is it, oh no, he's okay? I mean, obviously each situation is different.
Yeah, short answer is it totally depends. But
I would say that for the most part, fake guns are
starting to look more and more like real guns. If you pick up an airsoft rifle at
your local Walmart, that airsoft rifle is almost indistinguishable
from an actual AR. And even more so when students paint the orange tips black or remove the orange tips. So a lot of times we
don't necessarily know if the gun is a fake
gun or a real gun, and we have to essentially treat it as though
it's a real gun. So I would say that's probably
one of the more common scenarios that we see. Also, a lot of times we
see people using objects
like cell phones and pointing them at each other as though they're real guns.
There's a very popular TikTok challenge that's been going around the last few
years called Senior Assassin, which is about the
most insensitive thing that I think you could do in today's climate. But
students are bringing airsoft rifles or fake pistols,
or in some cases, real guns to school in order to perform
these mock senior assassinations and post them
online. I couldn't tell you how many detections we've gotten that
were similar to that, where we see students basically filming
themselves pointing guns at each other in
order to fulfill this TikTok challenge. So
in cases like that, we always respond to the customer. We let
them know what we're seeing. We try
to communicate as much detail about the scene as possible, but ultimately it's on the customer to respond and execute their own procedures.
Senior assassins, huh? Yeah. Oh,
man. And so, I just want to, you know, clarify: you haven't been involved in or directly encountered any of the kind of mass violence events that inspired the company?
Luckily, no. I think about this sometimes, and, you know, obviously if an event does happen, I want to be there. I want to be on those cameras to be able to detect it. But thankfully we've not been involved in a mass shooting event. I think it's only a matter of time, though. The more cameras we're on, the more guns we're going to see, the more coverage we're going to have. And I just pray that, when we're in that situation, we're able to make the detection before the first shot is fired.
Mm hmm. Now,
you mentioned that you're in 47 states now,
and you have a ZOC in Pennsylvania
and in Hawaii. So you cover the
different time zones. They're operational all the time. How
does the ZOC scale in kind with the scale of your emplacements and your model? Like, how many more people do you need for each new site?
Today, we're able to monitor the entire United States from just those two operating
centers. I anticipate in
the future we're going to expand and build operating centers in other locations, but
today we're able to just basically follow a model that says when we add
X number of cameras, we're going to expect to see X number
of additional alerts, and that increases our headcount. So
we follow a pretty linear model in that sense. We also try
to staff people based on alert load throughout the day. As
you can imagine, we get the most false positives during the times of
day that are most active in front of cameras. So if
you think about a school, that's the five minutes every
hour when students are walking in between classrooms. And so during
the day, probably 8 AM to 5 PM, we
staff heavier than we need to during the nights and off times and
weekends. But it's been
pretty standard for us and the more scale that
we have, the more different sites that we're on, the
more data that we include from those different sites
into our model to make it better, the more predictable our scaling model is. So that part has been pretty straightforward.
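Tim describes the staffing model as roughly linear in camera count; a toy version of that relationship might look like the sketch below. All of the coefficients are invented purely for illustration and are not ZeroEyes' actual numbers.

```python
import math

# Invented coefficients, for illustration only.
ALERTS_PER_CAMERA_PER_SHIFT = 0.02    # expected alerts (mostly false positives) per camera per shift
ALERTS_PER_OPERATOR_PER_SHIFT = 40    # alerts one operator can comfortably review per shift

def operators_needed(camera_count: int) -> int:
    """Toy linear model: alert load scales with cameras, headcount scales with alert load."""
    expected_alerts = camera_count * ALERTS_PER_CAMERA_PER_SHIFT
    return max(1, math.ceil(expected_alerts / ALERTS_PER_OPERATOR_PER_SHIFT))

if __name__ == "__main__":
    for cameras in (1_000, 10_000, 50_000):
        print(f"{cameras} cameras -> about {operators_needed(cameras)} operators per shift")
```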
Now on the privacy side, which
you kind of referenced earlier when I was talking about the whole Overwatch-y type
thing. You have access to these cameras, but you're not watching those camera feeds
necessarily. You get, you'll get like the frame that
was flagged. But, like you just said, the data that you gather, you use to train the models. And I'm sure this probably varies customer to customer, but I'm wondering, on the privacy side, you know, if we're thinking about schools or public places, these big box retailers, for example, how you work on the data side to ensure privacy, and what specific components, I guess, of these images you'll use to train your models on? Is it anonymized in
some way, faces cut or blurred out or whatever?
For our training data, we try to obfuscate as little as possible, because we're concerned that it will affect the context of the model. But
for any data that's coming from a customer, we do remove
faces so that we're not infringing on any privacy of
the customer. Obviously, it's something that we get the customer's permission ahead
of time. We're very close with our customers in
that respect. But on the data training
side, we've, from the very beginning, tried to
distance ourselves from any sort of biometric detection
or recognition, specifically to address that privacy concern.
So instead of detecting a person and
then detecting a gun, we're strictly looking for a visible
firearm. And all of our data that is
annotated is annotated for that specific firearm. Avoiding
the facial recognition piece, avoiding any sort of biometric analysis has
allowed us to distance ourselves from privacy concerns with
our customers. And because of that, I think our customers trust us.
They trust that we're not looking at their live video. They
trust that we're not detecting individuals, and that the system isn't biased in any way that would affect them.
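Tim mentions removing faces from customer-sourced data before it enters the training set; one common way to do that, not necessarily how ZeroEyes does it, is to run a face detector and blur each detected region before the frame is stored, as in this sketch.

```python
import cv2

# Haar cascade face detector bundled with OpenCV; a production pipeline might use a stronger model.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def anonymize_frame(frame):
    """Blur any detected faces before the frame is saved into a training set."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        region = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
    return frame

if __name__ == "__main__":
    image = cv2.imread("customer_frame.jpg")
    cv2.imwrite("customer_frame_anonymized.jpg", anonymize_frame(image))
```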
And there's also a security side as well. Obviously, there's security to everything we're talking about; you're a security company. But there's the security of the model, the security of the cameras and the warning system. And you talked about on-prem deployments. But I'm wondering, and I'm just gonna throw a kind of stupid, movie-like scenario at you: would it be possible, or if it is possible, how do you mitigate it, for someone to somehow gain access to or hack into your system and trigger a false alarm? I don't know exactly how that would work, but if they were to gain access, is that a real security concern on your end, that your system could become compromised in a way that sends out false alerts?
I would say that's probably lower on my list of concerns. I
mean, getting hacked in general or falling prey
to some sort of social engineering is always in the back of my
mind and probably is for any technology leader or
startup founder. But
that piece of our business, it always runs through the
ZOC. So the only way for an image to be dispatched to our customers is
for somebody in the ZOC to actually click dispatch
and send it out. Like I said, we have
a really tight connection with our customers. So even in the event that
there was an accidental dispatch, we
would have direct line to the customer to be able to deescalate immediately
if that were to happen. So I haven't run any scenarios with
that specific threat in mind. But yeah,
it's always in the back of my mind that we'll get hacked in
some way that will either, you know, expose
some vulnerable information to the world or like
our customer information or, you know, in
this scenario, you know, affect our ability to dispatch or,
you know, send errant information out into the
universe to our customers. So it's always a threat, but it's lower on my list of concerns.
That's good. That's good. I've
got a couple more for you. And the first thing
is something that I've been thinking about, which
is, I wonder how
much you're thinking about or exploring ways that this
work expands beyond gun-specific detection.
You know, ZeroEyes doesn't have anything about guns in its name, right? And the idea of object detection tied to warning systems, with humans in the loop, connected to first responders, seems to me like it could scale to other situations. I mean, I even think about a fire, for instance, in a school building. The same way that your systems can tell first responders which camera flags an alert, it would seem to be a very advanced fire alarm for you to be able to tell someone there's a fire and this is exactly where it is, so evacuate accordingly, right? That kind of thing, other threats beyond guns. I wonder if that's something that's on your radar.
From the very beginning, we've been exclusively focused on being the
best at one thing instead of mediocre at a bunch of different things. And
that's served us really well up until this point in the company. That
being said, our expertise in AI and
object detection lends itself really well to expanding into other
use cases like this. I
see it generally as we are a very high-value trigger for
an incident to start. And I would love to expand those
triggers to cover more incidents, things like perimeter
security and intrusion detection, health and safety, to
your point. There's
a lot of work being done on the retail side with loss prevention. So I see there are applications
across many verticals that touch the same customers that we do.
I'd say first that we're going to continue to be focused on guns, but start
to expand into some of these other areas that lend themselves
well. And ultimately, moving
into a future where vision transformers and
large language models are more easily accessible in real time, it
opens up a lot of possibilities. We could take this in a lot of different directions
and ultimately try to solve as many customer value problems as possible. But
yeah, very interested in intrusion detection, other types
of weapons like knives and just aggressive behavior in general,
things like that. And then ultimately we
want that trigger to initiate some sort of valuable response. Today,
that is us sending situational awareness to our customers that
they can respond, but we're still sending a person to deal with a
very dangerous situation. And so I would love to initiate additional responses, like, you know, locking down doors, or sending a drone to verify the incident and just have additional eyes on. I think that's a very likely future for us. If you can imagine a first
responder showing up to the scene after there's been a drone there for five minutes,
showing them what exactly is going on, they have the ultimate situational
awareness that they wouldn't have otherwise. And given
the dangerous nature of first responder jobs and security jobs,
I think it's completely natural that at some point in the future, there will be some sort of automated response component.
Yeah, there's a lot of places you can take it, I guess, when you start thinking
about it like that. Sure. You know, drones, automated
security. I
guess you almost see the beginnings of like a Robocop type thing.
It's definitely hard to differentiate and diverge your thinking between
what is sci-fi and what is real life. And we get questions about
that all the time. Total Recall type analysis
where privacy is a thing of the past. It's
something we're sensitive to, and as much as possible, I want to
avoid privacy issues, because I care about that personally, myself.
But at the same time, there's so much possibility on the response
side to incorporate drones
and robots, autonomous response, into areas where otherwise a person would be put in harm's way.
Right. I even think specifically for something like
the Hurt Locker, like if that could be done completely with robots, that
would be good. Sending people in the suits to defuse these things is crazy. But that also raises an
interesting point that I always think
is interesting because like we've mentioned
several times, you guys have been around for several years before
the kind of boom in AI that we're dealing with now.
And I wonder, I guess, if it got easier to talk to clients about what your offering is when AI became much more a part of the public vernacular, and if, in a weird way, that also made it a little bit more difficult, because a lot of it is tied up, as you mentioned, with people from outside the industry who have a hard time sifting through the fiction and the hype versus what's actually happening.
Yeah. Our biggest challenge in the early days was convincing people that
it wasn't snake oil. And so, I mean,
we ran so many demos at that point, just trying to prove to people that we could actually detect guns. And then they would say, well, you know, you're showing us you being detected with a gun. I want to hold the gun and be detected to really prove it out. So a lot of it has been proving that out. AI security of
the past was maybe embellished
on the sales side to an extent that it caused customers to
think that AI wasn't capable
of performing in real time on security cameras. And we spent the
last seven, eight years trying to change that opinion a
little bit. And then going
forward with the
emergence of large language models and vision transformers
and some of the
common topics around them, like copyright infringement, potential lawsuits, these large language models using public data in some way, it's caused a lot of questions about how we collect our data. And that's been one of our strong selling points to the customer: all of this data is organic to ZeroEyes. There's
no risk of us infringing on
any copyrights or using any public data. It's all stuff
that we've meticulously developed in-house
and have scrubbed to maintain the highest quality possible. So
I would say those are probably the two areas where we see overlap with the broader AI boom.
Yeah. Yeah, you operate at
the intersection in an interesting way.
It's different from a lot of other AI companies that
I talk to. But my last
point, I guess, to leave off here, is,
you know, I wish you guys weren't needed. And it's
interesting, you started the company,
this team of veterans, in
the wake of a devastating mass shooting and,
you know, that's a problem and a crisis
that hasn't really abated too
much. And so here, you know,
where you sit with a technological solution, and this kind of goal, through detection and better response times, to mitigate violence on the ground, whether that's someone with an airsoft gun, or whether that's, you know, a mass shooting might be about to happen and we have to get over there. Yours is a kind of technological solution to a problem that bleeds beyond technology. I wonder what you think about that, what other levers should be considered, or if the reality of the world we live in is one where technology is kind of the best lever we have left to pull.
I also wish we weren't needed. I think about that almost
on a daily basis. As a company, we follow gun violence throughout
the country really closely. And so I
see on a daily basis the news reports of shootings and
gun violence that happen all over the country. And yeah,
when people come up to me and they say, how's business doing? I
say, it's good. Unfortunately, people keep
committing gun violence. And there's probably
deeper issues at play there. But
when I think about our position with the customer, we're
providing a layer of security. And so when we go
into a new customer, they trust us based
on our track record and expansion to basically
be an expert to them about how their security should look. And
so we're able to really provide this great feedback where, you know,
maybe a customer is struggling with their camera system and
it doesn't even make sense to buy ZeroEyes until their camera system
is upgraded. And so we're happy to make that recommendation to them because, at the end of the day, our performance won't be as
great on older cameras, lower resolution cameras. And
it matters to us that our customers have the highest security posture
that they possibly could. So we're in this really awesome
kind of expert position for our customers. And all
good security comes in layers. We're just one piece of
it. And we're trying to address that gap of being
that first early warning sign to communicate to
customers when there's a visible gun, when there's a weapon that's brandished on
their physical site. Yeah,
going forward into the future, I see a huge possibility for us to
be really ingrained with the customer to be that expert and to
provide that
knowledge that's needed to understand
how to respond to one of these incidents. Hopefully, the vast majority of our
customers will never experience one of these incidents, but with
the prevalence of gun violence and how it's expanding, that's
becoming more and more likely. And it's not as easy for customers to
just say, you know, that's not going to happen to me. They
have to be prepared in some way, and we're in this awesome position to help
them prepare. So I think we're just one
layer in their broader physical security, but we're a
critical layer at this point. I hope I answered your question.
Thanks. Yeah. But from that perspective, the future
is exciting. But, you know, again, it's a dark space to operate in.
Yeah, we as a company, we have to do well
in order to do good. That's one of our kind of principles. In
other words, we have to be profitable in order to expand to more cameras
so that we can cover enough cameras in order to fulfill our
mission, which is stopping gun violence. And
so we live and breathe it on a daily basis here at ZeroEyes. And
when people come in to work every day, they're singularly focused
on that mission of ending gun violence. It's a beautiful place to work
because of that, but it does come with weight. It's a heavy mission. Well, Tim, I appreciate you
letting me steal you away from it for a little bit and walking me through what you do. So thank you. Thank you for having me.