I think it goes counter to what C++ is all about. It’s always stayed just a language, even if you count the STL. Languages like Java that get into things like that have really gotten bogged down and become specialized.
Just to mention, they do not want to include Cairo in the standard at all.
If you read the proposal, they want a simple 2D graphics interface in the standard, and they explain why they didn’t start from an existing C++ lib, and why Cairo was an interesting starting point.
One of the writers (Herb Sutter) is in charge of C++ development at Microsoft and explains how a 2D interface could be standardized and backed by Direct2D underneath.
This is not about the language standard but the standard library. stdio is as domain-specific as 2D could be. You don’t have to link it, but if you do, it would be nice to have a well-designed interface that is STL compliant; you could maybe even change your runtime if you want. Anyway, a lot of companies are currently reimplementing all this on top of OpenGL primitives for tablet and PC portability, so it wouldn’t be bad to standardize something that everyone does and that is pretty basic.
Compression formats are all standardized, so it doesn’t matter which vendor’s tool you use.
NVTT contains a library that you can link into your own projects, as well as a set of command-line tools, and Compressonator is a graphical tool (it may also have a command-line interface; I’m not sure). Also, NVTT is open-source while Compressonator isn’t. So it all depends on what you’re looking for.
OK, it sounds and looks really cool, but are the movements what the NN learned, or is it learning as it goes? What prevents it from moving off screen? I guess I’m still not clear on how you get a NN to work with what’s on the screen. Is every pixel an output from your NN?
I’m going to the Steinway Hall in London next month to show some new stuff I’ve been working on! scrolls through Heisenberg bowler hats
Guaranteed won’t pass. It would start a chain reaction where next follows 3D, then audio, then physics, etc. All of those APIs could become standards too, but one quick look at history will show you that people are so different, can never agree on anything, and don’t like a single solution. And a standard isn’t a standard unless people adhere to it. I doubt Microsoft, Sony, Nintendo, Apple, Google & co will be willing to sink mega dollars into implementing that standard when they already have their own designs and technologies.
In principle though, C++ is a language and should remain that way. Improve the language and its syntax, let the community produce substance with it. APIs improve over time, become easier, more flexible, more innovative, more up-to-date. You can’t do that with a standard. STL vs Java and .NET BCL is a fine example of API evolution.
Since we are on the topic… Does the Compressonator for ATi GPUs work for nVidia GPUs? I know there is one for nVidia i.e. the nVidia Texture Tools but overall reception on the internet about that tool is mostly kind of negative.
It seems arbitrary to support domain-specific libraries in a language standard.
Hell hell no
Worst idea since someone said “hey, that Hitler bloke, he’s kinda charismatic. How about we make him leader”
This video kinda explains it; you can see it working better. It just has a feature for every cell and similarity-matches to it by itself… so the end product is a subset of the total features.
but… it’s not really working properly :) I’m still working on it.
in the old days!
I STILL have to decode manually on some platforms. :<
A lot of set top boxes and TVs have really good GPUs, but since texture compression is not a requirement for OpenGL ES, just an optional extra, many devices don’t support it.
I used the form on their website; after contacting Juan directly from my email, he replied the following day. There might be something wrong with their form. Either way, I owe them an apology.
Well, I am not much of an artist, just basic Photoshop. Uni2D seems a bit out of my reach. Just wanted to know whether my sprite alignment method was right. Thank you, TheNut. Your help has been great so far. Giving both your replies the up arrow (like button?) beside the post. I’ll be sure to ask if I have any more questions! :-)
EDIT: OK… So, it increases the rep. Cheers.
The easiest way to animate sprites of different sizes is to enlarge the canvas so that all frames are the same size and are centred in the image. This way the animation appears proper for all frames, but also leads to texture waste.
For tightly cropped sprites, you have the right idea. You need to pin a source object and have your sprite offset to align correctly with the animation. This means you’re transforming the actual rectangle and not the texture. So when your sprite frame increases in size by 20 pixels, you need to enlarge your rectangle to compensate, and then offset it so that the animation is pinned correctly. This is pretty tricky, and some artists I know do this by hand. They have tools and whatnot to help them, but generally it’s quite manual.
An alternative solution is to look into skeletal sprites. For example, you can check out this tool for Unity. Break your object down into smaller pieces that you join together into a skeletal structure. This isn’t always possible though, so it depends on your needs.
just how did you manage to do that with NN?
Thank you so much, TheNut! Since I couldn’t find any tutorial, I did assume it was something based on logic. Only thing, I didn’t know how to approach it. You did save me a lot of head scratching, and hopefully I still have most of my hair. Your very clear and detailed explanation helped me understand it very clearly, thank you. One more question, though… How would I go about animating sprites of different sizes? And I am not talking about empty pixels of wasted space. No, this is about a series of sprites that are already packed. They are of different dimensions.
It’s like this… The first frame sprite is a character standing idly, so his whole body takes center frame in the sprite. Next two frames show him outstretching his arm towards the left, and the final frame shows his arm completely outstretched. The final frame has his body to the right of the sprite image and his arm to the left end of the image. Basically, the final frame is wider than the first.
So, if I animate it as it is, his body offsets to the right a bit. Now, throughout the sequence, his head does not move, so what I was thinking was maybe putting an anchor point on any fixed point on his head and use this anchor point while animating the sprites. That way, his body stays where it is and his arm moves forward. Is this the right way of approaching it?
But then, some sprites show him bending, so head might not be a great anchor point. I was thinking his feet would be better as they stay fixed on the ground. Depends on the sprite… But the point is, am I approaching this correctly? Please clarify this. Thank you.
Man, I buy top of the market and just raytrace ’em. Quick as reflections, quicker if you’re doing refractions too.
Reedbeta posted an article discussing the technicalities of texture compression if you’re interested. The article is available here.
As Vilem pointed out, drivers can do both decompression and compression, although for quality purposes you should use an offline tool for that. For high quality, compression can actually take quite a bit of time. Keep in mind the output is just raw image data. You need to wrap this in your own header format to store important details about the texture (resolution, mipmaps, compression type, etc.) and deal with that when you’re uploading the data to the video card.
When working with texture compression, you should also poll for the supported formats on the various platforms. S3 compressions are well supported on desktop hardware, but Android uses Ericsson texture compression. In the old days, one even had to decompress manually if the hardware didn’t support it (ghastly!). Just something to keep in mind.
I sent them an email and got a response less than a day later with the download link. From my understanding, Okam is going to do a public release of the Godot engine next month. The scripting language is GDScript, which is similar to Python.
I’m not aware of any tutorials on this subject. I developed my solution simply on intuition. You seem to have an idea what to do. You know for instance that you need a proper timer to cycle between frames and you’re looking for how to work with sprite sheets (aka: sprite atlases), so I’ll try and give you a few pointers.
1. OpenGL textures
I assume you are already fluent with OpenGL textures? How to load them into video memory, bind them, and place them on polygons? That’s about 80% of the problem. The other 20% is how to display only portions of the texture on the screen (see points 3 and 4).
2. Sprite Sheets
You will need some sort of file format that describes your (x,y) locations and width x height dimensions of all the sprites in your sprite sheet. I use my own Texture Packer tool, which outputs a single image file + XML file describing the coordinates. There are other tools out there more dedicated to that. Just doing a Google search for “Sprite Packer” will net you a few links.
It’s important to pack relevant textures into a sprite sheet. Sprite sheets serve two purposes. The first purpose was to reduce wasted texture space back when textures had to be a power of two. Simply uploading a single 100x200 image would expand to 256x256 in video memory, creating waste. Although that’s not a big concern anymore (video graphics has matured), speed is another key reason. Binding a single texture and then rendering a dozen sprites is more efficient than binding a texture for each sprite. This is most important when you get into rendering bitmap fonts (it follows the same principles as this).
3. Texture coordinates
Once you load in your sprite sheet, you want to translate the coordinates into texture space. When you deal with images, you often work in pixels. A 320x400 image for example. In video graphics, texture coordinates are normalized. That is, they are represented between 0.0 and 1.0. So let’s say you pack all your sprites in a 1024x1024 texture. Let’s say your 320x400 image is located at the offset (x,y) = (200,100). You need to prepare what I call a “Sprite Frame” that describes these coordinates in texture space. Simply put:
Frame.X = 200.0f / 1024.0f;
Frame.Y = 100.0f / 1024.0f;
Frame.Width = 320.0f / 1024.0f;
Frame.Height = 400.0f / 1024.0f;
You now have the texture space coordinates for your sprite.
4. Rendering the frame
I assume you know how to render polygons in an orthographic view (although you could also render sprites in perspective if you wanted). Regardless of your geometry and its transformations, if you wanted to render the above sprite on the geometry, you would pass the sprite frame coordinates into your vertex shader and do something like this.
spriteUV = SpriteFrame.xy + (UV * SpriteFrame.zw);
You pack your sprite frame into a 4d vector, where (x,y) represents the normalized top-left position in the texture and (z,w) is the width and height you calculated in step 3. UV is the original texture coordinates of the geometry, which generally should be planar. If you visualize this in your head, you’ll see that the sprite will be drawn to fill the entire area of the geometry (typically a quad). In your fragment shader, it’s a direct texture assignment:
gl_FragColor = texture2D(Sample0, spriteUV);
5. Timing
Timing is a bit more involved. Don’t think of timing just for your sprite animations, think of it as a global feature that you will want to use all over in your engine. I wrote my own timer class, which is based on an event and delegate design. In my render/update loop, I update the core timer (a singleton event that all instantiated timer objects listen to). Each timer object has a set interval and when that interval has passed, it will dispatch an event and notify the delegates. In my sprite engine, each timer triggered event will advance the sprite frame of the animation, which is generally defined in the sprite sheet. It looks a little something like this.
void MyRenderLoop () { CoreTimer.Update(); /* dispatch to listening timer objects */ }

void SpriteClass::InitTimer ()
{
    // Set interval to 30 FPS
    mMyTimer.SetInterval(1.0 / 30.0);
}

void SpriteClass::OnTimer () { mFrame = (mFrame + 1) % mFrameCount; }
Hopefully this should get the idea across, but there are a couple of edge cases you have to take into account, such as when the frame rate of your game drops. You may want to skip frames, in which case you would need to check the interval that has passed and advance the number of frames based on that value. In my case, I would do this by setting the timer’s interval to 0 and getting updates every frame, then checking the elapsed time. Generally you don’t want to skip too many frames, otherwise your animations get chaotic, so you have to design it with an upper bound in mind.
Hopefully this will get you started. Most of this stuff should feel intuitive. As long as you know the OpenGL API and how to render surfaces, textures, etc, then this should just be an application of logic.
Ok… So, if I wanted good quality textures, I should use an external tool to compress the textures and then load the compressed texture file into the program using the methods provided, right? I think I got it. Thanks, Vilem. I’ll ask if I have any further questions.
Huh, you’re mixing two things together…
Texture filtering - the main purpose of this is to reduce aliasing of the applied texture. For this we use texture magnification filtering (aka when 1 texel covers several pixels on screen) and texture minification filtering (aka when several texels cover 1 pixel on screen).
The minification filtering is often done using MIP maps (multum in parvo) - I guess you know the principle; just look it up on Wikipedia and read how it works.
Texture compression is something else; this relates to texture storage. You can store textures as RGB8 (8 bits per channel), RGB32F (32-bit float per channel), R5G6B5 (5 bits red and blue, 6 bits green), or S3TC DXT1 aka BC1 (this one is compressed using the S3 texture compression algorithm - this is a compressed texture format).
OpenGL supports hardware decompression of several formats (e.g. S3TC), and it can also compress these textures. The quality of textures compressed by the hardware is low, so it is recommended to compress using other tools (I use my own), but e.g. The Compressonator from AMD works perfectly.
Two days after this has been posted… I don’t feel like meditating more on this, I’ve already moved on.
Don’t compare Unreal to Unity; its reputation is light years ahead. Check released titles and compare again.
How long ago did you contact them?
You can see above that the original poster has been very prompt in responding to a couple of initial enquiries – maybe you should exercise a little more patience if it hasn’t been long?