Hi, I'm IOhannes. One thing first: I think I've presented parts of this before, so if you've heard it already, please bear with me - I'll keep it reasonably short. I'm going to present plugins in Gem - extending Gem the standard way. So, what is this actually about? Let's dissect the title. Plugins enable us to easily extend Gem for different kinds of media. It's really about extending the capabilities of existing Gem object classes, rather than adding new object classes. And I have to apologize to everybody who's not that techy, because the talk is very much a programmer's talk.
So, what's the motivation? Well, one thing is that the code for image acquisition within Gem has grown for about 15 years. And it was kind of an additive process, so it got messier and messier whenever something new was added. It ended up in quite a messy state. There have been some known problems up to Gem 0.92 - I mean, there still are some known problems; not all of them have been solved. I put three prominent ones down on the slide. One is obvious from user feedback: "Why can't I play back this file format or that one? It's a Gem patch, and the file works everywhere else." So that's not very satisfying.
The other thing is camera support. There used to be some DC1394 support within Gem - that's for the FireWire cameras, the IIDC protocol, not the ordinary DV protocol. But the implementation was outdated, and by the time the library got proper support, the API had changed, so the code didn't work at all anymore - we had this code lying in there, not working at all. Then there were other user requests, like: "I want to control the exposure, or whatever, of my fancy camera. I know it can be controlled in software, because the vendor's own software can control it." But Gem couldn't, obviously, even though it should. So this was kind of my assessment of things that are missing.
Then, of course, there are new features: with each new project you do, something is missing. For instance, there was this networked concert project funded by the European Union - the one referred to in the keynote. There was streaming between Graz and some other cities, like Paris or Belfast. And latency really matters there: say you get 80 milliseconds of delay between the video being captured in Graz and displayed in Paris. So: really low latency. Originally we wanted to render in one city and display in the other. So we had to find a way to get video data out of Gem, put it into a network streaming engine, and receive it on the other side - a low-latency streaming thing, outputting video data.
And then real contact came maybe two years ago, or a year and a half - I don't know exactly - when somebody approached me with a camera, a very nice device, that didn't work that well with Gem. So we actually wanted to have support for that. And then there was the great big ouch, which happened about two years ago as well: Apple released OS X 10.6, and in the 64-bit implementation of OS X 10.6, QuickTime was finally dropped - and all of the video acquisition code in Gem for OS X depends on QuickTime. So that really broke Gem on 64-bit OS X. It still doesn't really work, but what I'm presenting here is the way to sort it out.
So, image acquisition: I will mainly talk about image acquisition, but the same goes for the other acquisition and interfacing frameworks as well; I just use image acquisition as the example. In an ideal world, you would just start to do live video processing with Gem: for each operating system there would be exactly one framework to interface with, and that would be all of it.
So, here is the ideal architecture: the operating system provides a framework, the device vendors use this framework to provide access to their cameras, all the application developers just access this framework to get at the cameras, and everybody is done.
But then there's the real world, which is not ideal at all. It basically means that you have many different frameworks sitting between the actual hardware and the application. There are many reasons why there are so many frameworks. For one, frameworks change: they get updated because of emerging technologies. Video4Linux 1, for instance, was designed about 15 years ago, something like that, and turned out not to be that general and not extensible enough - so it got replaced by Video4Linux 2. And the same has happened in the Windows world, with Video for Windows and DirectShow.
And then we have proprietary frameworks as well, which just do their own thing. My recent experience there was mainly with GigE (Ethernet) cameras: there are several manufacturers of these cameras, and basically each of them provides their own framework for their existing cameras. We had cameras from two vendors - one of them was actually sponsoring this conference as well - so we got both SDKs. Each framework comes from one company; whether you can access the other company's cameras with it is another question - possibly you can, possibly not. So, I mean, they're just doing their own thing: they provide access to their existing cameras, but through yet more new frameworks.
So it turns out we have a whole zoo of frameworks, and it gets really complicated. You have a camera attached to the computer, and then there's a huge number of frameworks to choose from. Some frameworks support all the cameras; some frameworks support one special kind of camera; and some frameworks just aren't supported or maintained anymore. The problem, from the application's point of view, is that if we want to support all the cameras, we have to support all those frameworks. And this problem obviously applies not only to camera acquisition - it's the same for all kinds of media you'd like to get in, like film files, video files, and so on.
So there are three ways of looking at this, depending on your point of view on the problem. One is the developer's point of view. As a developer, one thing that would make you really happy is to be spared from having special code for accessing each of the different frameworks. There's lots of legacy code in there - for instance, Gem still carries support for a library that hasn't been updated for ten years or so. The code is still there. It's quite outdated - I don't think you can even get that library in a recent distribution. So the code sits there, it keeps bit-rotting, and I don't want to throw it away. Sometimes it's like: I want to keep the code in the repository, but I don't want to build it - and I'd like to be able to make that choice quickly. The other thing is: you change one little thing - say you fix one bug in the DirectShow backend - and the whole of Gem might have to be rebuilt and reshipped to get the fix to the users. So, yeah. It's about 60,000 lines of code, something like that. That's a lot. Let's move on.
The other one is the distributor's point of view - building a distribution, giving binaries away for people who don't want to compile things themselves. The more libraries you have in there, supporting different formats or whatever, the bigger the package actually gets: you have lots of features in there, lots of library dependencies, lots of framework dependencies - and most of the user base doesn't need them. Take one of these special cameras: of all of you here, probably only two or three people have one. For all the rest of you, it really doesn't matter at all whether Gem has support for this camera - but it adds a lot of library dependencies. And the problem is: if you do compile one big fat binary with all these dependencies, then you have to ship all the libraries along with it, or find a way to make them loadable.
Finally, there's the user's point of view, which again is probably best illustrated with the video thing. This is how it actually used to be until Gem 0.92: we had one pix_video object class - that was the official object - but in reality there were several different objects behind it: a video4linux one, a Darwin one for OS X, a DirectShow one, whatever. They all just had an alias name, so whenever you typed pix_video, one of those three or four video objects would actually be instantiated, depending on which platform you were on.
The problem with that approach is that, because those are in reality different objects with the same name, they can have different user interfaces - different messages they react to. For instance, there's this "dialog" message, which only works on OS X and Windows. There's a "norm" message, which I think only works on Linux. There's an "offset x y" message that only works on SGI machines. So basically we had a set of messages, each of which would work on some operating systems and not on the others. That basically means: you take a patch, develop it on your SGI, then transfer it to your Windows machine or whatever - and it might work, or it might not, just because the API for accessing things is different.
Another thing is that some of the backends are actually buggy - either because of how they're implemented in Gem, or because the underlying library is buggy. So whenever you try, for instance, to use the MPEG backend to open a media file, you just crash. And you crash Gem with it. There are kind of weird workarounds to avoid using one specific buggy backend; but once it's compiled in, the user has no easy way to just completely disable that backend once and for all.
Another solution - this is basically how PDP does it - is to provide different objects for each framework, rather than hiding them within one umbrella object: something like a pdp_v4l object. That makes it very clear that it's a different object, and that it might work differently. And that just works. The good thing is that there are no constraints from one API spilling over into the other APIs. For instance, on a video4linux2 device you can set some properties that you just can't set on a video4linux1 device. They're different objects. The con is, obviously, that the hardware gets hard-coded into the patch: you write a patch on Linux using the video4linux object, and that just won't work on OS X, because there's no video4linux on OS X. So it's not possible to take your patch to another platform.
So, what to do? The solution I finally came up with basically takes all three points of view together. From my developer's point of view: keep the problematic code outside of the core of Gem - which was a bit of an experiment, somehow. From the distributor's point of view we do exactly the same, but not at the source-code level - at the binary level: keep the problematic dependencies outside of the core Gem binary, don't make core Gem depend on QuickTime or whatever, and provide those as extras somehow. The user's point of view is a bit different: there it's about separating the backend interface from the user interface, in a very abstract, very generic way. And all of that screams "plugin system" - so a plugin system it is. It's implemented in C++, because the rest of Gem is written in C++, so that seemed the way to do it. It uses a class hierarchy to provide common infrastructure for all the backends.
I should probably skip this slide, because I'm running out of time. The important thing on it is: having plugins is really great, especially from a distribution point of view, because if you don't have a library installed but you do have the plugin installed, Gem will still load; when it tries to open the plugin, it won't be able to, and it will just refuse to load that one plugin - that is it. All the rest of Gem will just continue to work. And that's probably fine for many situations.
So, the basic idea of a plugin is to separate the user interface - the Pd API, how you as a user control the video stuff - from what's actually going on in the backend. The new boxes on the diagram are the various backends: they communicate with pix_video in one way, and you as a user communicate with pix_video in another way, the Pd way. So one side is Pd, the other side is C++. And because we want this to be very easily usable, it would be nice to have an auto-loading mechanism, so we can just drop in new plugins and they will automatically be recognized by the system, without anything more to do - just drop them into a special folder.
To do that simply, the plugin files just get special names. A plugin file name starts with "gem" and an underscore, then the type of the plugin - say "video" for a video plugin - then a unique name for the backend, like the Darwin/QuickTime one, and then the system-specific extension - the native one, like .so or .dll. We're not trying to do fancy stuff like Pd, where an external is called something like .l_i386. You then put those files into the very directory where the Gem binary lives, and Gem will just do a wildcard search for them and load everything it finds - on demand, actually. So only when you request a pix_video object will it go looking for video plugins, not before - so it doesn't blow up the footprint.
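If I sketch that in a few lines of C++ - and this is just an illustration, not Gem's actual loader code, and it assumes a POSIX system - it looks something like this:

    #include <glob.h>    // POSIX wildcard matching
    #include <dlfcn.h>   // POSIX dynamic loading
    #include <string>

    // Scan a directory for files matching the naming scheme described
    // above ("gem_" + plugin type + backend name + native extension)
    // and load whatever is found. Called only when the first object of
    // that type (e.g. pix_video) is actually created.
    void loadPlugins(const std::string& dir, const std::string& type) {
      std::string pattern = dir + "/gem_" + type + "*.so";
      glob_t hits;
      if (glob(pattern.c_str(), 0, NULL, &hits) == 0) {
        for (size_t i = 0; i < hits.gl_pathc; i++) {
          // a plugin whose dependencies cannot be resolved simply
          // fails to load here; everything else keeps working
          dlopen(hits.gl_pathv[i], RTLD_NOW);
        }
        globfree(&hits);
      }
    }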
Then, of course, you need a way of communicating between the pix_video object and the backends, and between the user and the pix_video object, in a very generic, extensible way. The basic class interface is quite easy to implement: if you're doing media capturing, you basically just have a function that says "get a new frame", "get a new frame". But then you want to go crazy: you want to do things like controlling the exposure of the camera, and you need a way to specify that as well.
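In other words, the mandatory part of a backend is tiny. A minimal sketch, with made-up names - Gem's real base classes look different:

    #include <string>

    struct Frame {
      unsigned char* data;   // raw pixel data
      int width, height;
    };

    class VideoBackend {
    public:
      virtual ~VideoBackend() {}
      virtual bool openDevice(const std::string& name) = 0;
      virtual bool startTransfer() = 0;
      virtual Frame* getFrame() = 0;   // "get a new frame", once per render cycle
      virtual void stopTransfer() = 0;
    };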
The problem is that the available controls depend on many things. One camera might have an exposure function, while on another one - a PS3 camera, for instance - you cannot control the exposure at all; it depends very much on the hardware you're using. It also depends on how the camera is attached - whether you're using a frame grabber or a FireWire camera: take a pan/tilt/zoom camera - you cannot control the pan/tilt/zoom through the frame grabber it's attached to. So which controls are available, and what they actually act upon, depends on many layers.
The basic idea is to have a property system, which is a key-value dictionary: you have a symbolic key name that says "exposure", and then you provide the value, which says 3, something like that. It's quite similar to Pd's message system, where you have a symbolic selector followed by a number of atoms.
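To make that concrete, here's a minimal sketch of such a dictionary - hypothetical types, not Gem's actual ones; the value mirrors a Pd atom, either a number or a symbol:

    #include <map>
    #include <string>

    // a value is either numeric or symbolic, like a Pd atom
    struct PropVal {
      enum Type { NONE, NUMBER, SYMBOL };
      Type type;
      double number;
      std::string symbol;
      PropVal() : type(NONE), number(0) {}
    };

    // the dictionary: symbolic key name -> value
    typedef std::map<std::string, PropVal> Properties;

    void example(Properties& props) {
      PropVal exposure;
      exposure.type = PropVal::NUMBER;
      exposure.number = 3;
      props["exposure"] = exposure;   // like sending "exposure 3" in the patch
    }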
I also thought it might be nice for the plugins not to be married to running in a Pd context - to keep them very generic - so that if you wanted to switch to some other environment, it would be easy to reuse those plugins. That's probably more academic, but yeah - think about it.
The more important thing is that there's query support: I can basically just ask the backend - do you have some properties, and if so, which ones? Are they readable? Are they writable? It will just return a list of all the properties, you get that on the Pd side, and you can react to it.
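Sketched in the same hypothetical C++ as above, the query side could look like this - again, these are not Gem's real names:

    #include <string>
    #include <vector>

    struct PropDescription {
      std::string name;   // e.g. "brightness"
      bool readable;      // can we query its current value?
      bool writable;      // can we change it?
    };

    // the query side of the backend interface sketched earlier
    class PropertyAware {
    public:
      virtual ~PropertyAware() {}
      virtual std::vector<PropDescription> enumProperties() = 0;
    };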
Setting the properties will then pass this entire dictionary - basically a list of key-value pairs - to the device, which is nice, because then you can do atomic property setting.
Basically, imagine you have a property group that requires restarting the entire camera - really resetting the camera, because you're changing the image size, something like that - and that takes, say, five seconds. So whenever you change the size, you have to wait for five seconds. But then imagine you have, like, four parameters that all require restarting. It's probably not nice to restart the camera four times - turn it off, do one setting, turn it on again, over and over. Hence the idea of atomically setting a number of parameters in one go.
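As a sketch - reusing the hypothetical Properties dictionary and VideoBackend from above, with made-up helper functions - atomic application buys us exactly that single restart:

    // hypothetical helpers, not real Gem API:
    bool requiresRestart(const std::string& key);   // e.g. true for width/height
    void stopCamera(VideoBackend& dev);
    void startCamera(VideoBackend& dev);
    void writeProperty(VideoBackend& dev,
                       const std::string& key, const PropVal& value);

    // apply a whole batch at once: the ~5 second restart is paid once,
    // not once per parameter
    void applyProperties(VideoBackend& dev, const Properties& batch) {
      bool needRestart = false;
      for (Properties::const_iterator it = batch.begin();
           it != batch.end(); ++it)
        needRestart = needRestart || requiresRestart(it->first);
      if (needRestart) stopCamera(dev);
      for (Properties::const_iterator it = batch.begin();
           it != batch.end(); ++it)
        writeProperty(dev, it->first, it->second);  // unknown keys are ignored
      if (needRestart) startCamera(dev);
    }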
So, I can probably just show you a little bit of how this is actually meant to be used. So, here's - oh, this is Gem. Hello. We've created a window, and we have a picture here. What we can do now is just say "enumProps", and it gives us a huge list; we send that to the pix_video object, and the list comes out of its second outlet. We've just asked the currently used plugin, with the currently used camera: which controls are actually there? We have a brightness control - the third line - and a contrast control below that, and the list is split into readable and writable properties. So when we want to change something, this tells us where to hook in. So let's set, say, "saturation" - because that's one of the names we got back - and set it to a value of 64. And the same with contrast, and brightness... so, turning the brightness down. Oh yeah.
Now, just sending these "set" messages to pix_video doesn't do anything yet - it just adds properties to a dictionary. Then, when you tell it to apply the properties, it will send all the properties it has collected to the backend driver, and the backend will just try to apply the properties it knows about. If you use some other property - one the backend doesn't know about - it simply won't do anything with it, even though it's in the dictionary.
Because this set-then-apply business is a bit complicated - usually you just want to set a single thing and be done with it - there is a shortcut version: a message like "saturation 0", for instance, will do the same as "set saturation 0" plus applying the properties immediately. So that sets the saturation, and we're done with it - quite easy.
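So, to recap the message flow on the Pd side - and mind that I'm writing these selectors from memory here, the exact spellings are in the pix_video reference patch:

    set saturation 64    <- collect a property in the dictionary (nothing happens yet)
    set brightness 128   <- collect another one
    applyProps           <- push the whole collected dictionary to the backend at once
    saturation 0         <- shortcut: set this single property and apply immediately
    enumProps            <- ask which properties the current backend offers
    enumerate            <- ask for the list of available devices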
Now, changing the device - using another camera, for instance: there's "enumerate", which gets all the available devices, if they are enumerable at all. We have the video4linux2 device, which we're using right now, and there's a "test" device, which is a dummy implementation - it produces white noise; well, not very white, but noise.
Doing the property enumeration again, we see that there's a completely different list of properties - for instance, there are the dimensions, width and height, and there's a new property called "type", which is just a symbol. So we can send it a symbol like "red", and it controls that one property of the test device - you get red noise, or green, and so on. So basically: whichever backend you have, it can have different properties; you can control them from your patch, and you can get information about which properties are available and how to use them. You can read properties back as well. And if a property comes back with no value attached, that basically means there's nothing to read there right now - but it still tells you the way to get at it; basically, we just want to know what's there.
So, let's skip ahead. When you compile Gem, a lot of the plugins get built along with it; they're all loaded on demand - when you need them, and not loaded otherwise - and you can just delete the ones you don't want.
And some new plugins have been added, mainly in the video class, for supporting the GigE/Ethernet cameras: plugins that access those proprietary vendor frameworks. These are usually not shipped with the Gem binaries - we can't really ship them, because the frameworks are under proprietary licenses, so it's no good trying to distribute them - but you can build the plugins yourself, for instance.
Of course, there are way more things to do. One of the most important things, probably, is the OS X stuff, because right now we still only have plugins for the QuickTime framework, which is not working on 64-bit OS X. So somebody would need to write QTKit plugins, which I don't have the capacity to do. So if some of you are into Apple OS X development and know a little bit about the QuickTime successors, I would be very thankful for some help with these things, just so Gem can finally work on recent versions of OS X.
Actually, there already is code for one more video framework in there, which I haven't tried because I don't have the hardware for it. What would also be nice would be some virtual frame-server thing, to output video data to other applications - something to find or write for OS X and for Windows. For Linux, there is already the video4linux loopback functionality, so we can "record" to these virtual video devices.
"Record" is probably a bad name for that - it's just output, a stream of images. And, I haven't mentioned this yet: we have a handful of plugin types right now. There's film, for reading video files, and record, for writing video files; there's video, for live image acquisition; and we have image loaders and image savers. So basically, you already get quite a lot of functionality out of this.
What would probably need to be done next is to add a new plugin type for 3D models, because Gem has had this model loader in there for ages, and it only handles Wavefront OBJ files. So right now you have to use other software to convert your existing models into this one file format. There are libraries around that could enable us to load a number of different 3D model formats - but before I touch that code, I will first have to design a plugin interface for it, and I haven't done that yet.
One big problem with the current implementation is that it's using C++. C++ does name mangling when compiling to binary code, and because name mangling is not standardized between different compilers, this means that a plugin compiled with one compiler - say Visual Studio - cannot be loaded by a Gem host compiled with, say, GCC. This is actually not a big problem on platforms like Linux or OS X, where there's virtually only the GCC compiler; but it definitely is a problem on Windows, where plugins may well be built with GCC while the Gem library is usually built using Visual C++ - so they will not be compatible. Moving to a C interface might solve this issue, because C, unlike C++, has a stable, well-defined binary interface.
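Just to illustrate what such a C interface could look like - purely a sketch, not what Gem currently does: only plain C types and one unmangled entry point cross the plugin boundary.

    /* the table of function pointers a plugin hands to the host;
       plain C, so the binary layout is the same for every compiler */
    typedef struct gem_video_c_api {
      void* data;                                   /* backend-private state */
      int  (*open_device)(void* data, const char* name);
      int  (*get_frame)(void* data, unsigned char** pixels, int* w, int* h);
      void (*close_device)(void* data);
    } gem_video_c_api;

    /* the single exported symbol; extern "C" disables C++ name mangling,
       so the host can find it with dlsym()/GetProcAddress() no matter
       which compiler built the plugin */
    extern "C" gem_video_c_api* gem_video_plugin_init(void);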
So, to almost conclude: I really do think all this plugin business is nice, but it would be way nicer if you didn't need it - if there were just one single framework per platform, where the hardware manufacturers could simply make their devices accessible from within that framework, and we wouldn't have to care about having thousands of plugins, one plugin per SDK, or something like that. There are standards - the trouble is, that's exactly why there are so many of them. And it's really not good to do the same work again and again and again: for instance, we have three camera plugins in there right now that basically access the same devices, and every other application that would like to use these cameras has to duplicate the work that was done here.
So, I think that's about it. All this plugin stuff is going to be in the 0.93 release of Gem, which is about to be released, and there is a Windows binary preview available on the Gem web page - so you can download it, try it, and report things if you're running Windows and don't want to compile it yourselves. The same goes for OS X, though there are some people there who might be able to compile it themselves. Time-wise, I think it should be possible to release within the next two or three weeks. So - good. Thank you very much for your attention. Questions?
It's all simple enough, apparently - no questions? Please do use these things and try them. I mean, I try to keep on top of it, but I hardly ever use OS X or Windows myself, so it would be really important to have people test things there.
Q: There's a question - I can't speak very loud. Will there be a plugin where you can write frames into a container?
A: Yes - there's the record plugin, actually. Ah, that's what you meant, right? Yeah, right - so there's record; originally it was conceived for recording videos into movie files.
Q: Yes. So, can you write to and read from the same container at the same time, so I can have an endless delay? Do you understand what I mean?
A: I'm not sure what an endless delay is - that sounds a bit hard to handle.
Q: What I mean is: is it not possible to write pictures into a container and read while you're still writing, so you'd start reading the first pictures?
A: Oh, well, yes - depending on the container you're writing to. One thing that is there is the video4linux loopback device, which is a virtual video camera: Gem can write to that, and at the same time you can use Gem - or whatever other application - to open this virtual camera and play it back, like in the example you mentioned. There's no delay built into this device itself, but you can chain things together, build your pipeline, and do whatever you want to do.
Q: In any case...
A: Well, yes, obviously - that would be a really good example. I mean, you could even use it to send the video around the world and back, and then you'd have your delay. So, okay. Is there another question? Another question?
Q: When do you think it will be running?
A: Running? I mean, you've just seen it running. Released - within the next month, hopefully.
Q: Yeah, my question is - we already talked about it - the OS X support. If I understand you correctly, there's absolutely nobody right now in the Gem community who's using OS X in this respect and could be working on it - there's no maintainer; I mean, the situation there is just not very responsive.
A: Yes, okay. So, there's actually nobody working on that right now - it's an open position. You'd get all the fame: the very last slide here prominently says "your name here" - so please do that. Thank you.
