Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,DeletionDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,ContentLicense "139581","1","139873","","2017-04-05 02:44:50","","0","229","

For a personal project, I'm trying to implement a tilemap. My goal is to obtain this

But all I got is this

I am calculating the UV coordinates inside my shader with the following formula, where 8 and 13 are the width and height of my tileset, in tiles:

vs_uv = (1.0 / vec2(8, 13)) * (vec2(tile / 8, tile % 13) + vertex);

Where tile is one element of this array

GLint map[] = {
     0,  1,  2,  3,  4,  5,  6,  7, 
     8,  9, 10, 11, 12, 13, 14, 15,
    16, 17, 18, 19, 20, 21, 22, 23,
};

and vertex is one line of this array:

GLfloat vertices[] = {
    0.0f, 1.0f,
    1.0f, 0.0f,
    0.0f, 0.0f,

    0.0f, 1.0f,
    1.0f, 0.0f,   
    1.0f, 1.0f                  
}; 

Here's the full code of the example:

#include <algorithm>
#include <memory>
#include <cstring>

#include <boost/filesystem.hpp>
#include <boost/multi_array.hpp>
#define GLEW_STATIC
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <SDL2/SDL.h>

#include <pck/global.hpp>
#include <pck/program.hpp>
#include <pck/utils/stb_image.h>
#include <pck/window.hpp>

namespace fs = boost::filesystem;

const int WIDTH = 800, HEIGHT = 600;

GLint map[] = {
     0,  1,  2,  3,  4,  5,  6,  7, 
     8,  9, 10, 11, 12, 13, 14, 15,
    16, 17, 18, 19, 20, 21, 22, 23,
};

GLuint map_w = 8;
GLuint map_h = 3;

int main()
{
    // Wrapper around SDL2 window, also initializing OpenGL
    pck::Global::window.reset(new pck::Window(""Test_gl"", WIDTH, HEIGHT));

    // Vertices buffer
    GLfloat vertices[] = {
        0.0f, 1.0f,
        1.0f, 0.0f,
        0.0f, 0.0f,

        0.0f, 1.0f,
        1.0f, 0.0f,   
        1.0f, 1.0f                  
    }; 

    // Positions buffer 
    glm::vec2 positions[24];

    int index = 0;
    for(size_t j = 0; j < map_h; ++j)
    {
        for(size_t i = 0; i < map_w; ++i)
        {
            positions[index++] = glm::vec2(i, j);
        }
    }


    // Texture declaration
    GLuint tex_ID;
    glGenTextures(1, &tex_ID);

    int w(0), h(0), c(0);
    unsigned char* data = stbi_load(""tileset.png"", &w, &h, &c, STBI_rgb_alpha);

    if(data == nullptr)
    {
        std::cout << ""Failed to load texture\n"";
        stbi_image_free(data);
        return 1;
    }

    glBindTexture(GL_TEXTURE_2D, tex_ID);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        if(c == 3)
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
        else if(c == 4)
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);

        glGenerateMipmap(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 0);

    stbi_image_free(data);

    GLuint VAO;
    glGenVertexArrays(1, &VAO);

    glBindVertexArray(VAO);
        GLuint vertex_VBO;
        glGenBuffers(1, &vertex_VBO);

        glBindBuffer(GL_ARRAY_BUFFER, vertex_VBO);
            glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
            glEnableVertexAttribArray(0);
            glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), (GLvoid*)0);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

        // Also set instance data
        GLuint positions_VBO;
        glGenBuffers(1, &positions_VBO);

        glBindBuffer(GL_ARRAY_BUFFER, positions_VBO);
            glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec2) * 24, &positions[0], GL_STATIC_DRAW);
            glEnableVertexAttribArray(1);
            glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), (GLvoid*)0);
        glBindBuffer(GL_ARRAY_BUFFER, 0);   
        glVertexAttribDivisor(1, 1);

        GLuint tiles_VBO;
        glGenBuffers(1, &tiles_VBO);

        glBindBuffer(GL_ARRAY_BUFFER, tiles_VBO);
            glBufferData(GL_ARRAY_BUFFER, sizeof(map), map, GL_STATIC_DRAW);
            glEnableVertexAttribArray(2);
            glVertexAttribPointer(2, 1, GL_INT, GL_FALSE, sizeof(GLint), (GLvoid*)0);
        glBindBuffer(GL_ARRAY_BUFFER, 0);   
        glVertexAttribDivisor(2, 1);
    glBindVertexArray(0); 

    // Shader for instantiated tiles
    pck::VertShader vs(std::string(R""(
        #version 330 core
        layout (location = 0) in vec2 vertex;
        layout (location = 1) in vec2 position;
        layout (location = 2) in int tile;

        uniform mat4 model;
        uniform mat4 projection;
        uniform ivec2 tileset_tile_size;

        out vec2 vs_uv;

        void main()
        {
            gl_Position = projection * model * vec4(vertex + position, 0.0f, 1.0f);

            vs_uv = (1.0 / vec2(8, 13)) * (vec2(tile / 8, tile % 13) + vertex);
        }  
    )""));

    pck::FragShader fs(std::string(R""(
        #version 330 core

        uniform sampler2D image;

        in vec2 vs_uv;

        out vec4 fs_color;

        void main()
        {
            //fs_color = vec4(vs_uv, 0.0f, 1.0f);
            fs_color = texture(image, vs_uv);
        }
    )""));

    std::shared_ptr<pck::Program> program(new pck::Program(vs, fs));   

    // Uniforms
    program->use();

    glm::mat4 model;
        model = glm::rotate(model, 0.0f, glm::vec3(0.0f, 0.0f, 1.0f)); 
        model = glm::scale(model, glm::vec3(16.0f, 16.0f, 1.0f)); 

    pck::Global::zoom = 2; 
    glm::mat4 projection = glm::ortho(0.0f, static_cast<GLfloat>(pck::Global::width / pck::Global::zoom), 
        static_cast<GLfloat>(pck::Global::height / pck::Global::zoom), 0.0f, -1.0f, 1.0f);

    glm::ivec2 tileset_tile_size(8, 13);

    glUniform2iv(glGetUniformLocation(program->ID(), ""tileset_tile_size""), 1, glm::value_ptr(tileset_tile_size));

    glUniformMatrix4fv(glGetUniformLocation(program->ID(), ""projection""), 1, GL_FALSE, glm::value_ptr(projection));
    glUniformMatrix4fv(glGetUniformLocation(program->ID(), ""model""), 1, GL_FALSE, glm::value_ptr(model)); 

    //glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);

    while(pck::Global::window->is_closed())
    {
        while(pck::Global::window->poll_event() != 0)
        {
            if(pck::Global::event->type == SDL_QUIT)
            {
                pck::Global::window->close();
            }
            else if(pck::Global::event->type == SDL_KEYDOWN)
            {
                if(pck::Global::event->key.keysym.sym == SDLK_ESCAPE)
                    pck::Global::window->close();
            }
        }

        glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);

        glBindTexture(GL_TEXTURE_2D, tex_ID);

        program->use();

        glBindVertexArray(VAO);
            glDrawArraysInstanced(GL_TRIANGLES, 0, 6, 24);
        glBindVertexArray(0);

        pck::Global::window->update();
    }

    return 0;
}
","87712","","","","","2017-04-14 03:32:37","Wrong UV calculation","","2","0","","","","CC BY-SA 3.0" "139612","1","139636","","2017-04-05 21:12:41","","0","43","

The game is a 2D platformer.

The problem I have is that when I press the B button to crouch from standing, I do not know how to implement a function where B can be pressed again to move back to the standing sprite.

The issue is that with the code presented below, both blocks check B == Pressed every frame, so the second one immediately undoes the first and it basically appears as if the player never crouched at all. Could somebody help please?

if (facingRightSide == true)
        {
            if (currPad.Buttons.B == ButtonState.Pressed)
            {
                m_currState = AnimState.CrouchRight; 
                isCrouchedRight = true;                                
            }
        }

if (isCrouchedRight == true)
        {
            if (currPad.Buttons.B == ButtonState.Pressed)
            {
                m_currState = AnimState.FacingRight;
                isCrouchedRight = false;
            }
        }
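
What I suspect is needed is edge detection: only toggle when B goes from Released to Pressed between two frames. A minimal sketch of that idea, engine-independent (Python here, with booleans standing in for ButtonState):

```python
# Simulate seven frames of B-button state; True = pressed.
frames = [False, True, True, True, False, True, False]

crouched = False    # stand-in for isCrouchedRight
previous_b = False  # last frame's button state
toggles = 0

for current_b in frames:
    # Rising edge: pressed this frame, but not on the previous frame.
    just_pressed = current_b and not previous_b
    if just_pressed:
        crouched = not crouched  # flip between standing and crouching
        toggles += 1
    previous_b = current_b

print(toggles, crouched)  # 2 False
```

Holding B for several frames then counts as a single press, so the two if blocks no longer undo each other within one frame. In XNA this maps to keeping the previous GamePadState around and comparing it against the current one.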

Thank you for your time.

","96514","","","","","2017-04-06 10:01:36","Crouching Up and Down XNA C# Controller","","1","0","","","","CC BY-SA 3.0" "139617","1","139641","","2017-04-06 02:41:35","","0","149","

I'm working on a maze project and I would like to have secret doors. My basic idea is to have the walls be destroyed when the player is both within the bounds of a trigger volume, and presses a button. I know the basic way to implement the code, but my main concern is how to handle the volumes. Should I create a class, maybe called ""SecretController"" that holds the code that says:

""if player is in box[i] and presses fire, destroy door[i]""

This seems like it would work, but I'd rather know before I go create a ton of stuff I'll just have to delete later. Thanks for the help folks!

","86649","","","","","2017-04-06 11:47:24","Using Volumes to reveal secret doors in UE4 C++ project","","1","0","","","","CC BY-SA 3.0" "139626","1","139648","","2017-04-06 07:12:46","","5","6335","

I have a project that was started in Unity 5.5. When I open it in 5.6, most 2D graphics look like they're not anti-aliased: left is 5.6, right is 5.5

What you see there are UI Images in a Canvas. The weird thing is that the red cross image looks OK, but the card with the red outline doesn't.

I've tried setting mipmaps to true and anti-aliasing to 8x or 4x, with no difference. I hoped this was only in the editor, but a build shows the same jaggies.

Has some setting changed in 5.6 that I can edit so my graphics look anti-aliased again, or is this a bug?

","99668","","63208","","2017-04-07 22:06:34","2017-06-25 10:59:33","Is UI anti-aliasing broken in Unity 5.6?","<2d>","2","3","1","","","CC BY-SA 3.0" "139627","1","139631","","2017-04-06 07:58:22","","-3","304","

Okay, I know it is not as easy as the title implies, but I was wondering: I have an idea for a game I want to play. I recognize it would take a pretty huge development effort, but for the full version I would be willing to pay up to €800, which I know does not cover the development costs in the slightest.

What are my options?

PS: I am not sure if this is the right Stack Exchange site, but I couldn't find a better fit.

","99692","","","","","2017-04-06 09:06:37","""Here's money, make my game""","","3","4","","2017-04-06 09:53:45","","CC BY-SA 3.0" "139663","1","139665","","2017-04-07 00:47:56","","0","846","

I've been trying to get the vector that represents the local rigid body's true forward pointing (z axis, or blue colored) vector. I've been using Debug.DrawLine(...) to try and find out which position vector to use.

Debug.DrawLine(_rb.position, new Vector3(_rb.position.x, _rb.position.y, _rb.position.z + 10), Color.red, .01f, true);
Debug.DrawLine(_tf.position, new Vector3(_tf.position.x, _tf.position.y, _tf.position.z + 10), Color.green, .01f, true);


Debug.DrawLine(transform.forward, new Vector3(transform.forward.x, transform.forward.y, transform.forward.z + 10), Color.blue, .01f, true);
Debug.DrawLine(transform.forward, new Vector3(transform.forward.x, transform.forward.y, transform.forward.z + 10), Color.yellow, .01f, true);

rb.position and tf.position appear to be equivalent, while transform.forward and Vector3.forward are equivalent but sit at the world origin. All four have the same orientation when I change the orientation of the Rigidbody via its angular velocity property.


Why do you need the local z axis orientation?

I'm implementing a vehicle in Unity without using Wheel Colliders, because they don't have the level of control that I want and their physics are extremely wonky. Being able to obtain a vector representing the z axis orientation makes it very easy to turn the car, since I only have to modify the Rigidbody.angularVelocity property to get it to turn.


Code for reference

For reference, my driving controller code consists of three steps within the FixedUpdate() method.

First is the simple Turning algorithm, which creates a triangle relationship with the front and back wheel of the car and the requested steering angle and then factors this relationship into a rate of change of angular velocity:

if (Input.GetButton(""Right"")) {
  _steerAngle = 45;
  var l = Mathf.Abs(Vector2.Distance(new Vector2(BL.position.x, BL.position.z), new Vector2(FL.position.x, FL.position.z)));
  _turningCircleRadius = l / Mathf.Sin(_steerAngle);
  _rb.angularVelocity = new Vector3(_rb.angularVelocity.x, new Vector2(_rb.velocity.x, _rb.velocity.z).magnitude / _turningCircleRadius, _rb.angularVelocity.z);
} else if (Input.GetButton(""Left"")) {
  _steerAngle = -45;
  var l = Mathf.Abs(Vector2.Distance(new Vector2(BR.position.x, BR.position.z), new Vector2(FR.position.x, FR.position.z)));
  _turningCircleRadius = l / Mathf.Sin(_steerAngle);
  _rb.angularVelocity = new Vector3(_rb.angularVelocity.x, new Vector2(_rb.velocity.x, _rb.velocity.z).magnitude / _turningCircleRadius, _rb.angularVelocity.z);
} else {
  _steerAngle = 0;
  _rb.angularVelocity = new Vector3(0, 0, 0);
  _turningCircleRadius = 1 / 0f;
}

Then the force exerted by the engine is calculated and then converted into change in velocity:

var latSpeed = new Vector2(_rb.velocity.x, _rb.velocity.z).magnitude;
  if (Input.GetButton(""Accelerate"")) {
    var force = _rb.mass * 10;
    latSpeed += force / _rb.mass * Time.fixedDeltaTime;
  } else if (Input.GetButton(""Brake"")) {
    var force = -1 * 1000 * _rb.mass;
    latSpeed += force / _rb.mass * Time.fixedDeltaTime;
    if(latSpeed <= 0) latSpeed = 0;
  }

Finally, the velocity is modified:

_rb.velocity = new Vector3(transform.forward.x * latSpeed, _rb.velocity.y, transform.forward.z * latSpeed);

As you can see this driving algorithm would work fine on a completely flat surface, but would start to have wonky interactions with hills if the vehicle's pitch were to change.

","55275","","","","","2017-04-07 02:57:42","Obtaining vector representing local z axis orientation","","1","0","","","","CC BY-SA 3.0" "139691","1","141502","","2017-04-08 12:40:35","","1","716","

SpriteKit supports tile maps as of iOS 10 and has a pretty powerful engine. Only, I am having trouble discovering whether it is possible, and how, to natively create tile sets directly from a tileset image.

For example, it seems that many other tile map programs (such as Tiled) support uploading a single tileset image with a few inputs (16x16 tiles at 64px per tile, for example) to create an array of tile textures. The only way that I've found to integrate this with Xcode's new SKTileSet feature is to manually crop each tile into its own image, upload that image into a separate texture, and access them via name.

Is there a better way to upload a tile map image and access its tiles using Apple's SKTileMapNode engine?

","99763","","","","","2017-05-23 19:58:43","SpriteKit SKTileMapNode with Tileset Integration","","1","0","","","","CC BY-SA 3.0" "139692","1","139695","","2017-04-08 13:01:09","","1","441","

For example, if the user presses the ""Fire"" button, which leads to the player character doing some kind of animation, should the client evaluate by itself whether it can play the animation, or wait for a response from the server telling it that it's OK to play the animation?

","","user94720","","","","2017-04-08 14:43:49","Should I sync animations from the server to the client or let the client play its own animations?","","1","0","","","","CC BY-SA 3.0" "139694","1","139698","","2017-04-08 14:13:14","","0","482","

I am creating a simple game using Java. I'm not using any game library. I just want to know if it is okay to call Thread.sleep(40) before calling repaint().

public void run() {
    while(isGameRunning) {
        try {
            Thread.sleep(40);
            repaint();
        }
        catch(Exception e) {
        }
    }
}

or should I use:

private long last_time = System.nanoTime();
private double ns = 1000000000/25D;
private double delta = 0;
@Override
public void run() {
    while(Universe.IsGameRunning) {
        long time  = System.nanoTime();
        delta += (int)(time - last_time)/ns;
        last_time = time;
        System.out.println(delta);
        if(delta>=1) {
            repaint();
            delta--;
        }
    }
}

The second snippet has higher CPU and RAM usage than the first one. Can you explain to me how 'delta timing' actually works?
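
As I understand it (correct me if I'm wrong), delta timing means accumulating the real elapsed time each loop iteration and running one fixed update whenever a full step's worth has built up. An engine-independent sketch of that accumulator (Python, with hard-coded frame times in milliseconds instead of System.nanoTime()):

```python
STEP_MS = 40.0     # one logic update every 40 ms = 25 updates per second
accumulator = 0.0  # the 'delta' variable in the Java snippet
updates = 0

# Pretend the loop runs every 16 ms for 10 frames (160 ms total).
for frame in range(10):
    elapsed_ms = 16.0  # in real code: now - last_time
    accumulator += elapsed_ms

    # Run as many fixed updates as the accumulated time allows.
    while accumulator >= STEP_MS:
        updates += 1   # here you would call repaint()
        accumulator -= STEP_MS

print(updates, accumulator)  # 4 0.0
```

If I read the second Java snippet right, its higher CPU usage comes from the loop spinning without ever sleeping between checks; the accumulator itself is cheap.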

","70254","","71818","","2017-04-08 16:25:29","2017-04-08 16:54:17","Java Thread.sleep() VS Get last and current time","<2d>","1","0","","","","CC BY-SA 3.0" "139703","1","139704","","2017-04-08 18:47:28","","1","341","

I have a camera class that is missing some functionality.

I need to give it the ability to, given a direction, or a point to look at, will rotate the camera left/right and up/down to look at this point or along this direction, without causing gimbal lock.

Let's say I move the camera to (10,10,10).

I want the camera to point at (0,0,0).

By normalizing the vector, I find that the direction I want to look along is (-0.577,-0.577,-0.577). We'll call this ""Forward"".

What I want to be able to do is compute the local ""Up"" and ""Right"" vectors, which are perpendicular to Forward.

How do I do this?
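
The approach I've been considering (not sure it is the standard one) is to cross Forward with a fixed world up axis to get Right, then cross again to get the true Up. A small sketch of just the math (plain Python, no engine):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

forward = normalize((-10.0, -10.0, -10.0))  # camera at (10,10,10) looking at the origin
world_up = (0.0, 1.0, 0.0)

# Right is perpendicular to both forward and the world up axis.
# (Degenerate when forward is parallel to world_up; pick another axis then.)
right = normalize(cross(forward, world_up))

# The camera's local up completes the orthonormal basis.
up = cross(right, forward)
```

Depending on your handedness convention you may need to swap the cross product arguments.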

","48612","","","","","2017-04-08 20:05:38","Compute ""up"" and ""right"" from a direction","","1","2","1","","","CC BY-SA 3.0" "139710","1","139733","","2017-04-09 04:02:27","","0","211","

I am trying out SFML and want to draw simple 2D graphics on top of a 3D object.

sf::ContextSettings settings;
settings.depthBits = 24;
settings.majorVersion = 4;
settings.minorVersion = 1;//opengl v4.1
settings.attributeFlags =  sf::ContextSettings::Core;

sf::RenderWindow window( sf::VideoMode(1200,800),""sfml"", sf::Style::Default, settings);

sf::CircleShape qs;
qs.setRadius(400);
qs.setPosition(400,400);

while(window.isOpen()){
        sf::Event evnt;
        while (window.pollEvent(evnt)) {
            switch (evnt.type) {
                case sf::Event::Closed:
                    window.close();
                    break;
            }
        }
        glClearColor(0.2f, 0.3f, 0.3f, 1.0f);

        window.draw(qs);

        window.display();
    }

The sf::ContextSettings is set up so that I can use OpenGL to draw 3D graphics, but with these settings my qs circle does not show in the window. If I comment out settings.majorVersion = 4; and settings.minorVersion = 1;, then it appears.

I am rather new and not sure how it works.

  1. Can this be resolved?

  2. Is it correct to think that I should use SFML for interface graphics and OpenGL for 3D? How do people normally do it?

","45614","","","","","2017-04-09 23:59:40","sfml - graphic not showing once i change the context setting","","1","0","","","","CC BY-SA 3.0" "139713","1","141717","","2017-04-09 06:44:06","","1","152","

I am trying to make a simple card game in which the hand of cards is displayed on a row to the player on screen, and by dragging a single card, all others should move accordingly, with the same speed as the mouse on the X axis only. This is my current code:

public void HandleInput(GameTime gameTime)
{
    _previousMouseState = _currentMouseState;
    _currentMouseState = Mouse.GetState();

    if (_currentMouseState.LeftButton == ButtonState.Released)
    {
        _dragged = false;
    }

    _mouseDown = _currentMouseState.LeftButton == ButtonState.Pressed
        && _previousMouseState.LeftButton == ButtonState.Released;
}

public void Update(GameTime gameTime)
{
    foreach (Card card in _players[0].Cards)
    {
        if (_mouseDown)
        {
            if (card.Area.Contains(_currentMouseState.Position))
            {
                _dragged = true;
            }

            if (_dragged)
            {
                int movementDelta = _previousMouseState.X - _currentMouseState.X;
                Debug.WriteLine(""previousX: "" + _previousMouseState.X 
                    + "" currentX: "" + _currentMouseState.X);

                _players[0].Cards.Select(c =>
                {
                    Rectangle area = c.Area;
                    area.Offset(movementDelta, 0);
                    c.Area = area;
                    return c;
                }).ToList();
            }                
        }
        card.Update(gameTime);
    }
}

HandleInput runs before Update in the game loop. I have implemented a screen manager, but I don't think that is the issue here. The problem with this code is that the previous mouse X is, most of the time, exactly equal to the current X, and movement is erratic at best (meaning that 90% of the time nothing happens, and the other 10% all cards move in one huge, uncontrollable leap). I have also tried using gameTime in the calculations for the movement delta, but that doesn't work either. I'm not sure in what other way I could tackle this (I dabbled a bit with Vector2, but unsuccessfully). I'm kind of noobish at Monogame, but I understand that polling is preferred to event handling here.
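
Reduced to pure state logic, what I believe dragging should do is: while the button is held, offset every card by (currentX - previousX) each frame, and update the previous value every frame as well. A sketch with made-up mouse samples (Python, no Monogame):

```python
# Hypothetical mouse X samples, one per frame, while the button is held.
mouse_x = [100, 104, 110, 111, 120]

card_x = 50              # X of one card's rectangle
previous_x = mouse_x[0]

for current_x in mouse_x[1:]:
    movement_delta = current_x - previous_x  # note the sign: current minus previous
    card_x += movement_delta                 # apply the same offset to every card
    previous_x = current_x                   # update every frame, not only on press

print(card_x)  # 70: the card moved by the same 20 pixels as the mouse
```

Note the delta here is current minus previous, the opposite sign of my movementDelta above, so the cards follow the mouse instead of mirroring it.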

","90223","","63208","","2017-04-09 09:01:18","2017-05-28 04:07:01","Moving several rectangles by dragging a single one in Monogame","","2","0","","","","CC BY-SA 3.0" "161736","1","161738","","2018-07-14 14:50:09","","0","36","

I have two objects, A and B. A impacts B, causing B to move.

I want to programmatically ""amplify"" B's movement while maintaining B's resulting direction.

I've tried adding a script like this:

void OnCollisionEnter(Collision col) {
    // Check whether it is a tag
    if (_forceAlreadyApplied==false) {

        _rb.AddForce (_forcePower * transform.forward, ForceType);
        _forceAlreadyApplied = true;

    }
}

The problem is that transform.forward obviously changes B's direction.

Maybe this is a wrong approach.

My question is: how can I ""amplify"" the current speed?
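
In vector terms, what I mean by ""amplify"" is multiplying the current velocity vector by a scalar: the magnitude grows, the direction stays the same. A sketch of just that math (Python; in Unity the vector would be _rb.velocity):

```python
import math

velocity = (3.0, 0.0, 4.0)  # current speed: 5 units per second
factor = 1.5

amplified = tuple(c * factor for c in velocity)  # same direction, 1.5x the speed

speed = math.sqrt(sum(c * c for c in amplified))
print(speed)  # 7.5
```
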

Thanks

","2494","","","","","2018-07-14 16:21:06","How can I ""amplify"" current motion ?","","1","0","","","","CC BY-SA 4.0" "161743","1","161749","","2018-07-15 00:10:28","","1","105","

With all the gimbal lock problems, quaternions, and stuff like that - why do we even use gimbals for rotation? Why not use local rotation instead of gimbals?

","81902","","81902","","2018-07-15 00:16:42","2018-07-15 07:39:50","Why use gimbals?","","1","1","4","","","CC BY-SA 4.0" "161752","1","161755","","2018-07-15 09:30:40","","2","466","

I tried to set the color like below, but Unity doesn't accept the format spriteRenderer.color = ""#ffffff"".

How can I give a hex color to a SpriteRenderer in Unity?
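
Unity does have ColorUtility.TryParseHtmlString(""#ffffff"", out Color c) for exactly this. To show what the conversion does under the hood, here is the parsing logic as a plain sketch (Python): each pair of hex digits becomes a 0-1 float channel.

```python
def parse_hex(hex_string):
    # '#rrggbb' -> (r, g, b) floats in 0..1, the range Color expects
    r = int(hex_string[1:3], 16) / 255.0
    g = int(hex_string[3:5], 16) / 255.0
    b = int(hex_string[5:7], 16) / 255.0
    return r, g, b

print(parse_hex('#ff8000'))
```

With those values in hand, the Unity side would be spriteRenderer.color = new Color(r, g, b); or you skip all of this with ColorUtility.TryParseHtmlString.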

","118943","","21890","","2018-07-16 13:03:56","2018-07-16 13:03:56","How can I change the color of a SpriteRenderer using a hexadecimal string representation?","","1","0","","","","CC BY-SA 4.0" "161756","1","161776","","2018-07-15 13:19:09","","2","5253","

Unity 2018.2.0 makes the Network class obsolete. I have used ""Network.player.ipAddress"" in my code to get the local LAN IP address.

    internalIP = Network.player.ipAddress;
    externalIP = new WebClient().DownloadString(""http://icanhazip.com"");

What code should replace this? What would be best practice for getting internal and external IP addresses for manual LAN and internet direct connections between server and client?

Internal/external IP addresses are needed so players can load the game, then tell their friends what their IP is to direct connect.

","83633","","83633","","2018-07-15 16:58:10","2019-09-19 12:56:52","Get internal and external IP addresses in Unity 2018.2.0?","","1","4","","","","CC BY-SA 4.0" "161761","1","161762","","2018-07-15 18:20:38","","1","227","

I want to change the default pawn of my game and I decided to do it with c++.

I came up with this solution, which works fine:

static ConstructorHelpers::FClassFinder<APawn> MyPawn(TEXT("" address...""));
        DefaultPawnClass = MyPawn.Class;

However, I searched on the internet and found this approach used more often:

static ConstructorHelpers::FObjectFinder<UClass> MyPawn(TEXT("" address...""));
        DefaultPawnClass = (UClass*)MyPawn.Object;

What advantages does FObjectFinder have over FClassFinder?

Are there certain circumstances where one is preferable to the other?

","117702","","","user1430","2018-07-17 15:29:28","2018-07-17 15:29:28","UObjectFinder versus UClassFinder; when is one better than the other?","","1","0","1","","","CC BY-SA 4.0" "161771","1","161772","","2018-07-16 05:00:39","","22","2749","

In 3D math I always see matrices with one additional dimension. For example, in 3D graphics, matrices are always 4x4 and in 2d they are 3x3 matrices. Can anyone explain why?
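
The short version, as I understand it: the extra dimension (homogeneous coordinates) exists so that translation can be written as a matrix multiplication too. An NxN matrix can rotate, scale and shear N-dimensional points, but it always maps the origin to the origin, so it can never translate; padding the point with a 1 fixes that. In 2D:

$$
\begin{pmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
=
\begin{pmatrix} x + t_x \\ y + t_y \\ 1 \end{pmatrix}
$$

The same trick in 3D gives the 4x4 matrices, and the extra coordinate also carries the perspective divide in projection matrices.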

","62441","","7367","","2018-07-16 05:20:22","2018-07-16 15:16:07","Why do transformation matrices always have an extra dimension?","<3d>","1","0","6","2018-07-16 15:19:39","","CC BY-SA 4.0" "161775","1","162627","","2018-07-16 13:01:57","","-1","195","

I have some Fragmentarium code that I would like to convert so it will work in Godot. Does anyone have any ideas how I can do this?

The code is below:

#include ""Progressive2D.frag""

#group Spiral
uniform int StripesNum; slider[1,10,100]
uniform int StripesAng; slider[-180,45,180]
uniform float StripesThinkness; slider[0,0.5,1]
uniform float CenterHole; slider[0,0.125,1]
uniform float OuterHole; slider[0,0.825,1]

const float pi = 3.141592653589793;

vec2 cLog(vec2 z)
{
return vec2(log(length(z)), atan(z.y, z.x));
}

vec3 color(vec2 p)
{
float t = radians(float(StripesAng));
float c = cos(t);
float s = sin(t);
mat2 m = mat2(c, -s, s, c);
vec2 q = m * cLog(p);
return vec3(float
( mod(float(StripesNum) * q.y / (sqrt(2.0) * pi), 1.0) < StripesThinkness
|| length(p) < CenterHole
|| length(p) > OuterHole
));

}

Here's the animation of the code working in Fragmentarium

Here's the animation of moire patterns, I'm trying to recreate this in Godot so I can animate multiple copies of spirals turning at different rates (along with using sliders to adjust variables) to produce different moire patterns.

Note: Progressive2D.frag is a utility/include for the program that sets up anti-aliased 2D rendering. Progressive2D.frag code location

","118971","","","","","2018-08-09 22:48:39","Converting code to work in godot with sliders","","1","0","","","","CC BY-SA 4.0" "161777","1","161780","","2018-07-16 14:10:47","","1","181","

I'm trying to implement an arrow at the bottom of the screen on an Android device. The arrow is placed in the center of the screen and is supposed to rotate by a given angle. The problem is that when using the SpriteBatch draw function for a TextureRegion, it rotates the whole image, so the y coordinate also moves up a bit.

I want the arrow to stay centered at its x, y coordinates, with the y coordinate stuck to the bottom of the screen, while still being able to rotate it. I don't know if this is possible with my approach.

If you have an approach in libGDX that is very different from mine, I still want to see it, because I don't care how I get this to work; I just want to get it to work somehow.

So far I've found this: Rotation - libGDX, which helped me get the rotation a little better, but I need an arrow like in the game called Bubble Shooter. Image of Bubble Shooter Arrow

One way to see it, I guess, is that the arrow should be a radius inside an invisible circle. See images below.

Example:

","118973","","118973","","2018-07-16 14:24:49","2018-07-16 14:54:42","Implement Bubble Shooter arrow rotation using LibGDX?","","1","0","","","","CC BY-SA 4.0" "161778","1","161779","","2018-07-16 14:20:10","","1","59","

I'm not sure if this is the right place to ask this sort of question but since there is a google-play tag...

I've just released a new game (my first one, actually) two days ago, and I've noticed that it shows up as an update instead of a freshly released game: under the ADDITIONAL INFORMATION section on the listing page it shows Updated instead of Released. I don't know how that happened, and I'm worried it may affect the relevance of the game on the store. Is there any way to fix this issue? Am I right to worry about the relevance of the game?

Link to the listing page of the game: Google Play link.

EDIT:

To clear any confusion, I'm not talking about people seeing the install/update button when they are about to install/update the game. I'm talking about the game release date itself. The game shows up as an update in the Google Play Console as if it had been released before. Why was publishing the game for the first time interpreted as updating an already released game?

","114972","","114972","","2018-07-16 16:48:27","2018-07-16 23:22:53","New game shows up as an update instead of a release","","1","0","","","","CC BY-SA 4.0" "161783","1","161787","","2018-07-16 16:05:28","","1","1783","

I know that the UCLASS() macro creates a separate UClass for every UObject class, but what is the need for this separate class? How do these two classes relate to each other, and what is the basic difference between them?

","117702","","","user1430","2018-07-17 15:57:50","2020-01-25 18:01:51","What's the difference between UClass and UObject?","","2","0","1","","","CC BY-SA 4.0" "161785","1","161820","","2018-07-16 16:23:30","","2","76","

I want to write somewhat randomized object activation effects; for example, when you step on a trap, you can be teleported, damaged, cursed and so on. I applied the strategy pattern for this: damage/healing is managed by a class ChangeHealth, teleportation by a class SetPosition, etc.

I want to know how I can manage the creation of these objects in code from data, so that I can specify, for example from a string or a bit stream, that there will be a trap which damages the player and a shrine which teleports you upon activation.

I'm working in C#, but I'm interested in the overall approach to this problem.
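
To make the question concrete, this is the kind of sketch I have in mind (all names hypothetical): a registry maps an effect name found in the data to a factory, so a record like ChangeHealth -10 in a level file becomes a configured effect object:

```python
def change_health(amount):
    def apply(target):
        target['health'] += amount
    return apply

def set_position(x, y):
    def apply(target):
        target['x'], target['y'] = x, y
    return apply

# Registry: effect name from the data -> factory for that effect.
registry = {'ChangeHealth': change_health, 'SetPosition': set_position}

def parse_effect(record):
    # 'ChangeHealth -10' or 'SetPosition 3 4' -> configured effect
    name, *args = record.split()
    return registry[name](*[float(a) for a in args])

trap = parse_effect('ChangeHealth -10')   # damaging trap, defined in data
shrine = parse_effect('SetPosition 3 4')  # teleporting shrine, defined in data

player = {'health': 100.0, 'x': 0.0, 'y': 0.0}
trap(player)
shrine(player)
print(player)  # {'health': 90.0, 'x': 3.0, 'y': 4.0}
```

In C# the registry could be something like a Dictionary<string, Func<float[], IEffect>> (names made up), with the strategy classes ChangeHealth and SetPosition producing the values.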

","118979","","118979","","2018-07-16 17:35:53","2018-07-17 16:16:27","Dynamic object creation from data","","1","0","","","","CC BY-SA 4.0" "161790","1","161808","","2018-07-16 21:24:46","","2","151","

Say you have a point on a grid, let's call it P, and you have a direction vector called V. How do I find the closest point from P in V's direction?

Example:

1|2|3
4|P|5
6|7|8

V = (0.5 , 0.5)

In this basic example the next point is 3. How do I make a general algorithm for more complex cases, say V = (0.44, -0.56)?

EDIT: Just to clarify something in my question: the next point has to be a neighbor of point P.
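
One idea I've been toying with (no clue whether it is the canonical solution): take the angle of V with atan2 and snap it to the nearest of the 8 neighbor directions, each owning a 45 degree sector. Sketch in Python, assuming y grows upward as in the example where V = (0.5, 0.5) leads to point 3:

```python
import math

# The 8 neighbour offsets, ordered counter-clockwise starting at +x.
OFFSETS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def next_offset(vx, vy):
    angle = math.atan2(vy, vx)                 # -pi..pi
    sector = round(angle / (math.pi / 4)) % 8  # nearest 45 degree sector
    return OFFSETS[sector]

print(next_offset(0.5, 0.5))     # (1, 1): the neighbour labelled 3
print(next_offset(0.44, -0.56))  # (1, -1): the neighbour labelled 8
```

Adding the returned offset to P's coordinates then gives the neighbouring point.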

","27206","","27206","","2018-07-17 08:46:56","2018-07-17 12:02:58","Given a point on a 2D grid of discrete points, and a normalized direction vector, how to I find the next point in that direction?","<2d>","2","5","","","","CC BY-SA 4.0" "161792","1","161800","","2018-07-16 22:05:19","","1","162","

So now I've run into the second problem of implementing the arrow. The rotation works fine from this link, but I have to calculate the angle from a touch point to the middle of the arrow placement.

I've already calculated that angle, if I'm not mistaken. I used a vertical distance that is just the touch point's y coordinate, since the center's y coordinate is 0. For the x coordinate I took the touch point's x minus the texture region's x coordinate from the draw function.

Then to calculate the angle I took the arc tangent of the vertical length divided by the horizontal length, and multiplied by 180 / PI to get the angle in degrees.

This worked a bit like I wanted, but not exactly. Because the triangle is built from the touch point and the bottom center point, it comes out mirrored on the other half of the screen, meaning that when moving a finger over to the other side, the angle became negative. I tried using the math absolute function, but then the arrow bounced back when it reached the angle limit.

Draw function:

    @Override
    public void draw(SpriteBatch spriteBatch) {
        spriteBatch.setProjectionMatrix(EtherSky.camera.combined);
        spriteBatch.begin();
        spriteBatch.draw(textureRegion,
            x - textureRegion.getRegionWidth() / 2, y,
            textureRegion.getRegionWidth() / 2, 0, 
            textureRegion.getRegionWidth(), 
            textureRegion.getRegionHeight(), 
            1.0f, 1.0f, degrees);
        spriteBatch.end();
     }

Input function:

    @Override
    public void input() {

    /*
    if (Gdx.input.getAccelerometerX() > 3 && degrees < 90) {
        degrees++;
    }
    else if (Gdx.input.getAccelerometerX() < -3 && degrees > -90) {
        degrees--;
    }*/

    // Distance from arrow bottom center point to touch point
    float distance = (float)Math.sqrt(
                      Math.pow((inputManager.touchPoint.x - (x - textureRegion.getRegionWidth() / 2)), 2) +
                      Math.pow((inputManager.touchPoint.y - 0), 2));

    float verticalDistance = inputManager.touchPoint.y;
    float horizontalDistance = inputManager.touchPoint.x - (x - textureRegion.getRegionWidth() / 2);
    float angle = (float)Math.abs(Math.atan(verticalDistance / horizontalDistance) * 180 / Math.PI);

    if (Gdx.input.isTouched()) {
        degrees = angle;
    }
    System.out.println(""Player ("" + Gdx.input.getX() + "", "" + Gdx.input.getY() + "")"");
    System.out.println(""Degrees: "" + degrees + "" Angle: "" + angle);
    }

The black dots on the image represents touch points and this is what the invisible triangles would look like if I'm not imagining wrongly. Example image:

UPDATED CODE

    float verticalDistance = inputManager.touchPoint.y;
    float horizontalDistance = inputManager.touchPoint.x - x; // (x - textureRegion.getRegionWidth() / 2)
    float angle = (float)Math.toDegrees(Math.atan2(verticalDistance, horizontalDistance)) - 90;

    if (Gdx.input.isTouched()) {
            degrees = angle;

            if (degrees < 90)
                degrees = 89;
    }

I tried to do this, but it doesn't work, because then I get the arrow stuck at 89 degrees forever. Note that this is not the arrow pointing straight up; it's the arrow pointing to the left.

I am aware of why the code doesn't work, but I don't know how to fix it.

UPDATED CODE FOR FUTURE READERS (ANSWER)

private void inputFollowFinger() {
        float verticalDistance = inputManager.touchPoint.y;
        float horizontalDistance = inputManager.touchPoint.x - x;
        float angle = (float)Math.toDegrees(Math.atan2(verticalDistance, horizontalDistance)) - 90;

        if (Gdx.input.isTouched()) {
            degrees = angle;
            arrowStayAt90Degrees();
        }
    }

private void arrowStayAt90Degrees() {
    final int stayAtDeg = 90;
    final int adjustDeg = 180;
    if (degrees < -adjustDeg) {
        degrees = stayAtDeg;
    }
    else if (degrees < -stayAtDeg) {
        degrees = -stayAtDeg;
    }
}
","118973","","118973","","2018-07-18 01:06:13","2018-07-18 01:06:13","Dragging arrow around using touch point Bubble Shooter libGDX?","","1","2","","","","CC BY-SA 4.0" "161796","1","161802","","2018-07-16 23:45:53","","1","592","

So I have a tower surrounded by 4 spawner GameObjects. I want to draw lines to all of them using LineRenderer. I wrote the following code to achieve this, but for some reason I am getting NullReferenceException: Object reference not set to an instance of an object at the line myLines[i].transform.position = startPos;

public Transform towerPos;
GameObject[] myLines;

void DrawLines()
        {
            myLines = new GameObject[spawnpoints.Count];
            Vector3 startPos = towerPos.transform.position;
            for (int i = 0; i < spawnpoints.Count; i++)
            {

                myLines[i].transform.position = startPos;
                myLines[i].AddComponent<LineRenderer>();
                LineRenderer lr = myLines[i].GetComponent<LineRenderer>();
                lr.material = new Material(Shader.Find(""Particles/Alpha Blended Premultiply""));
                lr.SetColors(Color.red, Color.red);
                lr.SetWidth(0.3f, 0.3f);
                lr.SetPosition(0, startPos);
                lr.SetPosition(1, spawnpoints[i].spawnPoint.transform.position);
                //GameObject.Destroy(myLine, duration);
            }
        }

The above function is called from Start(). The tower GameObject is dragged into the towerPos variable in the editor, so I don't know why it's null.
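As a side note on the exception itself: in C# (and Java alike), allocating an array of object references only creates null slots; each element must be assigned a constructed object before its members are used. A small Java sketch of the same pitfall:

```java
public class NullArrayDemo {
    // Allocating an object array creates null references, not objects.
    // Each element must be assigned before use, or dereferencing throws.
    static int firstValue(boolean constructElements) {
        Integer[] slots = new Integer[3]; // three null references
        if (constructElements) {
            for (int i = 0; i < slots.length; i++) {
                slots[i] = i * 10; // assign each element first
            }
        }
        return slots[0]; // NullPointerException when slots[0] is still null
    }
}
```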

","106622","","106622","","2018-07-17 00:01:54","2018-07-17 01:40:37","Trying to draw lines using LineRenderer from tower to spawners","","1","6","1","","","CC BY-SA 4.0" "161824","1","161835","","2018-07-17 17:37:59","","1","100","

I know the circle formula and how to draw a circle, but I want to put 4 points on it with equal distance from each other. How can I do that?

I work with Unity.
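The usual approach (a sketch, with illustrative names): step the angle by 2π/N and place each point at center + radius * (cos, sin):

```java
public class CirclePoints {
    // N points evenly spaced on a circle of radius r centered at (cx, cy).
    static double[][] pointsOnCircle(double cx, double cy, double r, int n) {
        double[][] pts = new double[n][2];
        for (int i = 0; i < n; i++) {
            double angle = 2.0 * Math.PI * i / n; // equal angular steps
            pts[i][0] = cx + r * Math.cos(angle);
            pts[i][1] = cy + r * Math.sin(angle);
        }
        return pts;
    }
}
```

In Unity the same formula works with `Mathf.Cos`/`Mathf.Sin` to build each point's position from the center.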

","62441","","","","","2018-07-18 13:48:18","how to put elements around circle with equal distance?","","1","2","","","","CC BY-SA 4.0" "161826","1","161828","","2018-07-17 18:08:11","","0","3343","

I am trying to find a GameObject which is not active in the current scene. It throws a NullReferenceException:

GameObject gameObject;

void Start() {
    // a non-active GameObject tagged with InactiveTag
    gameObject = GameObject.FindGameObjectWithTag(""InactiveTag"");
}

void Update() {
    if (someCondition) {
        gameObject.SetActive(true); // this line throws a NullReferenceException
    }
}
","118943","","40264","","2018-07-17 18:57:49","2020-04-17 07:44:15","How to find a non active gameobject in unity?","","3","0","","","","CC BY-SA 4.0" "161832","1","161833","","2018-07-17 19:41:17","","0","1814","

My settings.json (generated with JsonUtility.ToJson):

{
    ""masterVolume"": 0.10000000149011612,
    ""fullScreenMode"": 1,
    ""cheatsEnabled"": false
}

Offending line:

SettingsData sd = JsonUtility.FromJson<SettingsData>(Application.dataPath + ""/Settings/settings.json"");

My SettingsData-class.

[System.Serializable]
public class SettingsData
{
    [SerializeField]public float masterVolume;
    [SerializeField]public int fullScreenMode;
    [SerializeField]public bool cheatsEnabled;

    public SettingsData(float masterVolume, FullScreenMode fullScreenMode, bool cheatsEnabled)
    {
        this.masterVolume = masterVolume;
        this.fullScreenMode = (int)fullScreenMode;
        this.cheatsEnabled = cheatsEnabled;
    }
}

I really can't fathom why it's having trouble reading this back in. I even tried to save the enum as an int instead, but still no luck.

","108217","","","","","2018-07-17 19:54:53","JSON Parse (Invalid Value)","","1","0","","","","CC BY-SA 4.0" "161836","1","161872","","2018-07-17 20:34:46","","0","167","

In the screenshot, a group of 4 agents (in red) is staying on the waypoint (blue), just rotating in place on it. The other agents keep moving between the waypoints.

I'm creating waypoints and agents (NavMeshAgent):

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class InstantiateObjects : MonoBehaviour
{
    public GameObject prefab;
    public Terrain terrain;
    public float yOffset = 0.5f;
    public int objectsAmount;
    public bool parent = true;
    public bool randomScale = false;
    public string tag;
    public string name;

    public Vector3 RandScaleMin;
    public Vector3 RandScaleMax;

    private float terrainWidth;
    private float terrainLength;
    private float xTerrainPos;
    private float zTerrainPos;
    private GameObject clonedObject;
    private ObjectPool objectPool;

    public void Start()
    {
        //Get terrain size
        terrainWidth = terrain.terrainData.size.x;
        terrainLength = terrain.terrainData.size.z;

        //Get terrain position
        xTerrainPos = terrain.transform.position.x;
        zTerrainPos = terrain.transform.position.z;

        generateObjectOnTerrain();
    }

    public void Update()
    {

    }

    public void ReleaseObjects()
    {
        GameObject[] allobj = GameObject.FindGameObjectsWithTag(tag);
        for (var i = 0; i < allobj.Length; i++)
        {
            objectPool.ReturnInstance(allobj[i]);
            allobj[i].hideFlags = HideFlags.HideInHierarchy;
        }
        generateObjectOnTerrain();
    }

    public void generateObjectOnTerrain()
    {
        objectPool = new ObjectPool(prefab, objectsAmount);

        for (int i = 0; i < objectsAmount; i++)
        {
            //Generate random x,z,y position on the terrain
            float randX = UnityEngine.Random.Range(xTerrainPos, xTerrainPos + terrainWidth);
            float randZ = UnityEngine.Random.Range(zTerrainPos, zTerrainPos + terrainLength);

            float yVal = Terrain.activeTerrain.SampleHeight(new Vector3(randX, 0, randZ));

            var randScaleX = Random.Range(RandScaleMin.x, RandScaleMax.x);
            var randScaleY = Random.Range(RandScaleMin.y, RandScaleMax.y);
            var randScaleZ = Random.Range(RandScaleMin.z, RandScaleMax.z);
            var randVector3 = new Vector3(randScaleX, randScaleY, randScaleZ);

            //Apply Offset if needed
            yVal = yVal + yOffset;

            clonedObject = objectPool.GetInstance();

            if (randomScale == true)
                clonedObject.transform.localScale = randVector3;//new Vector3(randScaleX, randScaleY, randScaleZ);

            if (parent)
            {
                GameObject parent = GameObject.Find(name);
                clonedObject.transform.parent = parent.transform;
            }

            clonedObject.tag = tag;
            clonedObject.transform.position = new Vector3(randX, yVal, randZ);
        }

        if (prefab.name == ""AgentPrefab"")
            AgentsComponents.StartInit();
    }
}

Then adding two components to each agent:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.AI;

public class AgentsComponents
{
    private static GameObject[] objectsfound;

    public static void StartInit()
    {
        objectsfound = GameObject.FindGameObjectsWithTag(""Agent"");

        for (int i = 0; i < objectsfound.Length; i++)
        {
            objectsfound[i].AddComponent<NavMeshAgent>();
            objectsfound[i].AddComponent<AgentControl>();
        }
    }
}

And the AgentControl script:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.AI;

public class AgentControl : MonoBehaviour
{
    public List<Transform> points = new List<Transform>();
    private int destPoint = 0;
    private NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();

        var agentsDestinations = GameObject.FindGameObjectsWithTag(""Waypoint"");

        for (int i = 0; i < agentsDestinations.Length; i++)
        {
            points.Add(agentsDestinations[i].transform);
        }

        // Auto-braking makes the agent slow down as it
        // approaches a destination point, which is the
        // behaviour I want here.
        agent.autoBraking = true;

        agent.speed = Random.Range(10, 50);

        GotoNextPoint();
    }

    void GotoNextPoint()
    {
        // Returns if no points have been set up
        if (points.Count == 0)
            return;

        // Set the agent to go to the currently selected destination.
        agent.destination = points[destPoint].position;

        // Choose the next point in the array as the destination,
        // cycling to the start if necessary.
        destPoint = (destPoint + 1) % points.Count;
    }


    void Update()
    {
        // Choose the next destination point when the agent gets
        // close to the current one.
        if (!agent.pathPending && agent.remainingDistance < 0.5f)
            GotoNextPoint();
    }
}

I set autoBraking to true since I want the agents to slow down when getting close to each waypoint.

What I want is for all the agents to move between all the waypoints. Whether there are 10 agents or 500, some of them stay at a waypoint and do not continue to the next one. I waited 2-3 minutes and they still didn't continue.

I didn't change anything in the NavMeshAgent properties except setting autoBraking to true.

","115657","","","","","2018-07-19 00:09:24","Why some of the agents are keep staying in the waypoint and not continue to the next waypoint/s?","","1","0","","","","CC BY-SA 4.0" "103198","1","104640","","2015-06-30 11:59:36","","4","534","

I'm trying to debug some textures and FBOs with Nvidia Nsight 4.6 VS Edition. But when I select either ""Start CUDA debugging"" or ""Start graphics debugging"", I get an error:

""The program can't start because glew32.dll is missing for your computer. Try reinstalling theprogram to fix this problem""

The application runs just fine when I'm not using Nsight. What might be my problem?

My system: Windows 7 x64. Nsight 4.6 x64. GTX 580 with the latest drivers. OpenGL version 3.3. Building a Win32 application. (I tried changing the build target to x64, but that just resulted in a bunch of linker errors for GLFW and GLEW.)

","67770","","","","","2015-07-27 08:36:59","Nvidia Nsight 4.6 VS Edition. The Graphics debugger can't find glew32.dll","","1","2","","","","CC BY-SA 3.0" "103212","1","103217","","2015-06-30 14:50:39","","-2","1385","

I have my following (static) KeyListener http://pastebin.com/gR1i3Xzb

and this is my update Method:

@Override
  public void update()
  {
    input();
    /*...*/
  }

  private void input()
  {
    if (KeyBinding.downDown)
    {
      nextEntry();
    }
    else if (KeyBinding.upDown)
    {
      previousEntry();
    }
  }

Does someone have an idea how to handle single key presses?

Let's say I have a menu. If I hold the Down key even for a short amount of time, many menu entries are skipped. But I want to move to the next menu entry only once per press.
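A common fix is edge detection: remember the key state from the previous frame and fire only on the transition from released to pressed. A minimal Java sketch (`justPressed` is an illustrative name; call it once per frame with the polled `KeyBinding.downDown` flag):

```java
public class KeyEdge {
    private boolean wasDown = false;

    // Fires only on the frame where the key transitions from up to down,
    // so holding the key selects exactly one menu entry per press.
    boolean justPressed(boolean isDownNow) {
        boolean fired = isDownNow && !wasDown;
        wasDown = isDownNow;
        return fired;
    }
}
```

In `input()`, replacing `if (KeyBinding.downDown)` with `if (downEdge.justPressed(KeyBinding.downDown))` makes `nextEntry()` run once per press.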

","57681","","","","","2015-06-30 15:09:42","single Key press","","1","0","","2015-07-08 15:21:32","","CC BY-SA 3.0" "103215","1","103218","","2015-06-30 15:03:26","","5","1527","

I'm trying to make a 2D game in Unity. But when I drop an image into Unity, the image size increases 3 or 4 times. Can I fix that, or is this normal?

","55415","","","user54211","2015-07-13 03:16:25","2015-07-13 03:16:25","Why does Unity increase the size of image when I use the image in Unity?","<2d>","1","0","1","","","CC BY-SA 3.0" "103223","1","103236","","2015-06-30 15:51:13","","2","798","

I have an object in my game that, when clicked, causes the player to move toward it. It does this by sending its position to the player's SetTarget function. The player's Move function then heads toward the position.

This is working, but when I duplicate the object and click on it, the player still moves toward the original object's position. When I print debugging statements to the console, it appears that both positions are being calculated, but the original position is the last one and wins out. (I'm using the object's this.transform.position to send the position to the player's SetTarget function.)

Can I not duplicate objects like this and expect the script to work? Do I need to use a prefab? If so, how? I know how to create prefabs but am not sure what I would need to change with the script or other settings when they are created. Thanks.

Here's the code:

public class PotController : MonoBehaviour {

private GameObject chefObject;

void Start () {
    chefObject = GameObject.Find (""Chef"");
}

void Update() {
    if(Input.GetMouseButtonDown(0)){
        //print (""pot clicked!"");
        ChefController chefScript = chefObject.GetComponent<ChefController> ();

        print (this.gameObject.transform.position);
        chefScript.SetTheTarget (this.gameObject.transform.position);
    }
}
}

public class ChefController : MonoBehaviour {

private Vector3 target;
public float speed = 2;

// Use this for initialization
void Start () {
    target = this.transform.position;
}

private void MoveTowardsTarget() {
    Vector3 targetPosition = new Vector3(0,0,0);
    targetPosition = target;
    Vector3 currentPosition = this.transform.position;
    //check distance to target
    if(Vector3.Distance(currentPosition, targetPosition) > 0.4f) { 
        Vector3 directionOfTravel = targetPosition - currentPosition;

        directionOfTravel.Normalize();

        this.transform.Translate(
            (directionOfTravel.x * speed * Time.deltaTime),
            (directionOfTravel.y * speed * Time.deltaTime),
            (directionOfTravel.z * speed * Time.deltaTime),
            Space.World);
    }
}

void Update () {
    MoveTowardsTarget ();           
}

public void SetTheTarget (Vector3 position){
    target = position;
}

}
","67993","","67993","","2015-06-30 22:26:25","2015-07-01 13:40:22","Unity: How to make script work with duplicated objects?","","1","3","","","","CC BY-SA 3.0" "103229","1","104272","","2015-06-30 19:19:58","","1","272","

After asking a similar question yesterday, I've come across another problem with using joints in Box2D / LOVE while trying to create a weighted chain.

Everything is set up as follows, I've tried to remove most of the fluff:

Each link is created as a body/shape pair then joined together and added to a links table.

    for i = 1, segments, 1 do

        link = {}

        link.body = love.physics.newBody(world, xpos, ypos, ""dynamic"")          

        if (i == segments) then
            link.shape = love.physics.newCircleShape(endlink_radius) --Ending link
        else
            link.shape = love.physics.newCircleShape(link_radius) 
        end

        link.fixture = love.physics.newFixture(link.body, link.shape) --Fix bodies to shapes

        table.insert(links,link) 

        ypos = ypos + link_distance

    end

Links are joined together using rope joints to allow for some springy-ness:

    for i = 2, #links, 1 do

        x1,y1 = links[i-1].body:getPosition()
        x2,y2 = links[i].body:getPosition()
        links[i-1].join = love.physics.newRopeJoint(links[i-1].body, links[i].body, x1, y1, x2, y2, link_distance, true )

    end

The player controls the first element of the chain and holds the table of links, its created as follows:

local body = love.physics.newBody(world, 300, 100, ""kinematic"")
local shape = love.physics.newCircleShape(3)
local fixture = love.physics.newFixture(body, shape)
local chain = _chain.new(world)

Joined to the chain:

x1,y1 = body:getPosition()
x2,y2 = links[1].body:getPosition()
join = love.physics.newRevoluteJoint(body, links[1].body, x1, y1, x2, y2, true )

The player tracks the mouse cursor, determines a velocity and moves towards it by setting the linear velocity of the first chain element:

    body:setLinearVelocity(velocity.x, velocity.y)

And this very nearly works: you can pull the chain around quite nicely with the mouse (dragging the top pink element). However, if you keep spinning the chain, it begins to stretch and pull away from the first element:

Any ideas on how to solve this?

I've tried using different joints and parameters, adding a reinforcing joint from the first link to the last, adjusting body weights and altering the update iterations but can't seem to find anything effective.

Many thanks.

","68000","","-1","","2017-04-13 12:18:51","2015-07-20 10:17:37","LOVE Physics - Joint Stretching","","1","0","2","","","CC BY-SA 3.0" "103230","1","108801","","2015-06-30 19:35:21","","2","658","

I have a global light in my scene. It casts shadows using shadow mapping and has an associated camera (for rendering to the shadow map). I'm going to refer to it as my ""shadow camera"" from now on.

I need to find a way to place my shadow camera's near plane as close as possible to my scene's bounding box (clip it to the scene bounds).
I need to do this so the shadow casters are never clipped by the shadow camera's near plane (otherwise I'd get holes inside of shadows) and to make sure I don't accidentally cull any shadow casters behind the camera. This would also allow me to increase the shadow mapping precision, because it lets me move the near and far planes closer together.

Example 1 (possible to do using a simple plane check):

Example 2 (NOT possible to do using a simple plane check):

  • The black box is the scene's AABB (but it would be nice if this would work for OBBs or other shapes too).
  • The yellow arrow represents the light direction.
  • The green box is the shadow camera's frustum without any modifications.
  • The red box is my desired result.

At the moment I'm constructing the red box by projecting the black box onto the global light's direction vector and using the closest vertex's distance to compute the shadow camera's near plane. But this makes it impossible to get something like the 2nd image; instead, the red box starts above the scene's AABB.
I have thought of using SAT for this, but it doesn't seem to be the solution.
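The projection step described above can be sketched as follows (illustrative Java; `projectAabbOntoDir` returns the min/max scalar projections of the AABB's eight corners onto the normalized light direction, which give candidate near/far distances). This is the approach the question says is insufficient for the second case; it is included only to make the setup concrete:

```java
public class ShadowBounds {
    // Min/max scalar projections of an AABB's eight corners onto a
    // (normalized) direction; usable as near/far distances along it.
    static double[] projectAabbOntoDir(double[] min, double[] max, double[] dir) {
        double lo = Double.POSITIVE_INFINITY, hi = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < 8; i++) {
            double x = ((i & 1) == 0) ? min[0] : max[0];
            double y = ((i & 2) == 0) ? min[1] : max[1];
            double z = ((i & 4) == 0) ? min[2] : max[2];
            double t = x * dir[0] + y * dir[1] + z * dir[2]; // dot(corner, dir)
            lo = Math.min(lo, t);
            hi = Math.max(hi, t);
        }
        return new double[]{lo, hi};
    }
}
```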

","24009","","24009","","2015-07-01 09:23:47","2016-05-31 02:07:02","Clip shadow frustum to scene bounds","","2","0","","","","CC BY-SA 3.0" "103237","1","103238","","2015-07-01 00:13:32","","0","208","

I have been following this tutorial to try and start learning OpenGL. However, upon compiling my code, my triangle turns out to be black. At first I thought that there was something wrong with the fragment shader, but when I tried hard-coding x-values for the vertex shader, I noticed that it had no effect (notice the 0.7f x-value on the vertex shader).

main.cpp

#define GLFW_DLL
#include <iostream>
#include <fstream>
#include <GL/glew.h>
#include <GLFW/glfw3.h>



GLfloat verticies[] = {
    -1.0f, -1.0f, 0.0f,
    0.0f, 1.0f, 0.0f,
    1.0f, -1.0f, 0.0f
};

// Objects
GLuint VBO;
GLuint VAO;
GLuint vertShader;
GLuint fragShader;

void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode)
{
    if(key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
    glfwSetWindowShouldClose(window, GL_TRUE);
}

std::string readFile(const char* filePath)
{
    std::string content;
    std::ifstream fileStream(filePath);

    if(!fileStream.is_open())
    {
    std::cout << ""Failed to open "" << filePath << ""."" << std::endl;     
    return """";
    }

    std::string line = """";
    while(!fileStream.eof())
    {
    std::getline(fileStream, line);
    content.append(line + ""\n"");
    }

    fileStream.close();
    return content;
}

void createShader(const GLenum shaderType, const char* shaderPath, GLuint& shaderObject)
{
    // Create the shader.
    shaderObject = glCreateShader(shaderType);

    // Read the shader code from the shader file.
    const GLchar* shaderCode = readFile(shaderPath).c_str();

    // DEBUG:
    std::cout << shaderCode << std::endl;

    // Assign the shader code to the shader object.
    glShaderSource(shaderObject, 1, &shaderCode, NULL);

    // Compile the shader.
    glCompileShader(shaderObject);

    // Check to see if it compiled successfully.
    GLint success;
    GLchar infoLog[512];
    glGetShaderiv(shaderObject, GL_COMPILE_STATUS, &success);
    if(!success)
    {
    glGetShaderInfoLog(shaderObject, 512, NULL, infoLog);
    std::cout << ""ERROR::SHADER::VERTEX::COMPILATION_FAILED\n""
          << infoLog
          << std::endl;

    }
}

int main(int argc, char *argv[])
{
    // Initialize GLFW.
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);

    // Create OpenGL window. Exit if fails.
    GLFWwindow* window = glfwCreateWindow(800, 600, ""LearnOpenGL"", nullptr, nullptr);
    if (window == nullptr)
    {
    std::cout << ""Failed to create GLFW window"" << std::endl;
    glfwTerminate();
    return -1;
    }

    // Make window current
    glfwMakeContextCurrent(window);

    // Initialize GLEW.
    glewExperimental = GL_TRUE;
    if (glewInit() != GLEW_OK)
    {
    std::cout << ""Failed to initialize GLEW"" << std::endl;
    return -1;
    }

    // Set viewport size and position.
    glViewport(0, 0, 800, 600);

    // Set callback functions.
    glfwSetKeyCallback(window, key_callback);

    // Set clear color state.
    glClearColor(0.5f, 0.5f, 0.5f, 1.0f);


    // Create vertex and fragment shaders.
    createShader(GL_VERTEX_SHADER, ""../src/shaders/vertex.vert"", vertShader);
    createShader(GL_FRAGMENT_SHADER, ""../src/shaders/fragment.frag"", fragShader);

    // Attach then link shaders
    GLuint shaderProgram;
    glAttachShader(shaderProgram, vertShader);
    glAttachShader(shaderProgram, fragShader);
    glLinkProgram(shaderProgram);

    // Delete the shaders
    glDeleteShader(vertShader);
    glDeleteShader(fragShader);

    // Check if it succeeded.
    GLint linkSucceeded;
    glGetProgramiv(shaderProgram, GL_LINK_STATUS, &linkSucceeded);
    if(!linkSucceeded)
    {
    GLchar infoLog[512];
    glGetProgramInfoLog(shaderProgram, 512, NULL, infoLog);
    std::cout << ""ERROR::SHADER::PROGRAM::LINK_FAILED\n""
          << infoLog
          << std::endl;
    }

    // Generate the VAO and bind it.
    glGenVertexArrays(1, &VAO);
    glBindVertexArray(VAO);

    // Create VBO and bind to it.
    glGenBuffers(1, &VBO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);

    // Send verticies to buffer's memory.
    glBufferData(GL_ARRAY_BUFFER, sizeof(verticies), verticies, GL_STATIC_DRAW);

    // Set up vertex attributes
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (GLvoid*)0);
    glEnableVertexAttribArray(0);

    // Unbind VBO
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    // Unbind VAO
    glBindVertexArray(0);

    // Game loop.
    while(!glfwWindowShouldClose(window))
    {
    // Get input.
    glfwPollEvents();

    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(shaderProgram);
    glBindVertexArray(VAO);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glBindVertexArray(0);

    // Swap buffers.
    glfwSwapBuffers(window);
    }

    // De-allocate resources.
    glDeleteVertexArrays(1, &VAO);
    glDeleteBuffers(1, &VBO);

    // Terminate and close.
    glfwTerminate();
    return 0;
}

Vertex Shader

#version 330 core

layout(location = 0) in vec3 position;

void main()
{
    gl_Position = vec4(0.7f, position.y, position.z, 1.0f);
}

Fragment Shader

#version 330 core

out vec4 color;

void main()
{
    color = vec4(1.0f, 0.5f, 0.2f, 1.0f);
}
","60405","","","","","2015-11-29 20:13:43","OpenGL Shaders Ignored","","1","0","","","","CC BY-SA 3.0" "103242","1","103243","","2015-07-01 01:56:43","","3","3505","

My goal is to program a camera to point towards the mouse cursor. I attached the following script to the Main Camera.

using UnityEngine;
using System.Collections;

public class CameraController : MonoBehaviour {

    public float sensitivity;

    // Update is called once per frame
    void Update () {
        transform.Rotate (Input.GetAxis (""Mouse Y"") * -1 * sensitivity, Input.GetAxis (""Mouse X"") * sensitivity, 0);
    }
}

I ""told"" the camera to rotate around the x and y axes. Whenever it does this, the camera tilts sideways. When I checked the rotation of Main Camera, I saw that the z rotation had changed. Why is it rotating around the z axis when I left it at 0?

","67961","","67961","","2015-07-01 02:07:46","2017-07-04 17:51:26","Why is the camera tilting around the z axis when I only specified x and y?","","2","2","2","","","CC BY-SA 3.0" "103254","1","103264","","2015-07-01 10:20:08","","4","1607","

Is there a way to add a shadow effect to text drawn in MonoGame? You cannot add a shadow effect when creating a new spritefont file in the MonoGame Content Pipeline.

Thanks.
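There is no built-in shadow option in the spritefont pipeline, but the common workaround is to draw the string twice: once in a shadow color at a small offset, then in the foreground color on top (in MonoGame, two `SpriteBatch.DrawString` calls). A java.awt analogy of the same trick:

```java
import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class TextShadow {
    // Renders text with a simple drop shadow by drawing it twice:
    // a shadow pass at a small offset, then the foreground pass on top.
    static BufferedImage render(String text) {
        BufferedImage img = new BufferedImage(200, 50, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = img.createGraphics();
        g.setFont(new Font(Font.SANS_SERIF, Font.BOLD, 24));
        g.setColor(Color.BLACK);
        g.drawString(text, 12, 32); // shadow pass, offset by (2, 2)
        g.setColor(Color.WHITE);
        g.drawString(text, 10, 30); // foreground pass on top
        g.dispose();
        return img;
    }
}
```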

","43179","","","","","2015-07-01 13:12:03","Adding shadow and other effects on a spritefont Monogame","","1","0","","","","CC BY-SA 3.0" "103258","1","103259","","2015-07-01 12:13:18","","1","314","

I'm currently developing a game where players will be able to battle each other, not in real time but offline, in a way where both players don't have to be online. This might seem like a bad idea, but it makes sense in the context of the game.

Right now I'm trying to create some kind of ranking system where the outcome of a match only affects the player who initiated it, since I don't want players losing rank when they're not playing.

A simple idea would be to award the player 2 points for winning and take away 1 for losing.

I honestly have no idea if this design would work, so I come here to get some suggestions on how to design a system like this. Thank you.
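For comparison, a common alternative to fixed +2/-1 points is an Elo-style update applied one-sidedly, so only the initiator's rating moves and offline players cannot lose rank (a hypothetical sketch; the 400 and K = 32 constants are the conventional Elo choices, not something from the question):

```java
public class OfflineRating {
    // One-sided Elo-style update: only the initiating player's rating
    // changes; the defender's rating is read but never written.
    static double updatedRating(double attacker, double defender, boolean attackerWon) {
        double expected = 1.0 / (1.0 + Math.pow(10.0, (defender - attacker) / 400.0));
        double k = 32.0; // K-factor: how quickly ratings move
        double actual = attackerWon ? 1.0 : 0.0;
        return attacker + k * (actual - expected);
    }
}
```

Beating a much stronger opponent then awards more points than beating a weaker one, which a flat +2/-1 scheme cannot express.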

","68065","","","","","2015-07-07 16:38:37","Ranking system for ""offline"" game?","","2","6","","","","CC BY-SA 3.0" "103277","1","103594","","2015-07-01 18:37:29","","4","462","

I am creating a 2D game using OpenGL. For sprites, I use textured quads (actually two triangles). The textures contain transparent pixels, since objects are not always perfectly rectangular.

How do I do collision detection on the objects, not the quads? I was thinking to first check the quads for collision and if they match, check the textures.

But how can I check whether two non-transparent pixels of the two objects are on top of each other (or next to each other)?

Or is there a completely different way of how this is done best?
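The broad-phase/narrow-phase idea from the question can be sketched like this (illustrative Java; the boolean masks stand for per-pixel alpha > 0, and positions are in whole pixels): intersect the two bounding rectangles first, then scan only the overlap region for a pixel that is opaque in both sprites:

```java
public class PixelCollision {
    // Per-pixel collision between two sprites with alpha masks.
    // a[row][col] is true where the pixel is opaque; (ax, ay) / (bx, by)
    // are the sprites' top-left positions in world pixels.
    static boolean pixelsCollide(boolean[][] a, int ax, int ay,
                                 boolean[][] b, int bx, int by) {
        // Broad phase: intersect the two bounding rectangles.
        int left   = Math.max(ax, bx);
        int top    = Math.max(ay, by);
        int right  = Math.min(ax + a[0].length, bx + b[0].length);
        int bottom = Math.min(ay + a.length, by + b.length);
        // Narrow phase: scan the overlap for a doubly-opaque pixel.
        for (int y = top; y < bottom; y++)
            for (int x = left; x < right; x++)
                if (a[y - ay][x - ax] && b[y - by][x - bx])
                    return true;
        return false;
    }
}
```

The masks would be extracted once from the texture data (any texel with alpha above some threshold), so the per-frame test never touches the GPU.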

","68078","","7804","","2015-07-01 19:39:43","2015-07-08 01:37:46","How do I check for collision between transparent textures?","","1","5","1","","","CC BY-SA 3.0" "103279","1","103286","","2015-07-01 18:56:14","","0","68","

I am creating a 2D (2.5D) game using OpenGL and orthographic projection.

It is simple to have relatively flat objects, e.g. characters. I simply use a quad with a texture of the character and move that about.

However, what is the best way to draw big objects that have depth, e.g. a big house? Do I use one quad with a three-dimensional-looking representation of the house on it, or do I use multiple quads (e.g. front, side, top)?

I prefer using one quad with a three-dimensional-looking texture on it. What are the drawbacks of this approach?

","68078","","68078","","2015-07-01 19:01:42","2015-07-01 23:10:41","How to display three dimensional Objects in a 2D Game using OpenGL and orthographic Projection?","<2d>","1","2","","","","CC BY-SA 3.0" "103289","1","103595","","2015-07-01 23:38:53","","2","2366","

I am trying to change the name of the sub-sprites of a sprite sheet. I have searched and tried everything I can think of and cannot get it to work, which makes me think it can't be done at this point in time.

Let's say I have a sprite sheet that has already been split up into individual sprites inside of Unity.

I am attempting to change the sub-sprite names programmatically.

I tried using

AssetDatabase.RenameAsset (...)

on the sprite sheet asset, which only changes the sprite sheet name and not sub-sprites.

I then thought to obtain a sub-sprite and attempt to change its name using the below code.

            if (AssetDatabase.IsSubAsset (subSprite))
            {
                AssetDatabase.RenameAsset (AssetDatabase.GetAssetPath (subSprite), ""newSprite"" + i.ToString ());
            }

However, this too only changes the sprite sheet name.

I'm not sure where else I can take this to achieve my desired outcome, apart from programmatically copying the original sprite sheet and using that to create a copy. But even then, I still am unable to alter the sub-sprite names.

I also tried changing subSprite.name, but this just changes the internal name and not the asset name.

Any ideas?

","24570","","24570","","2015-07-07 23:44:14","2015-07-13 23:00:46","How to programmatically change sprite sheet sub-sprite name","","2","5","0","","","CC BY-SA 3.0" "103294","1","103296","","2015-07-02 02:33:52","","2","274","

I'm making a map in Tiled.

I quickly ran out of room in the north of my map and would like to ""shift"" the tiles down. I'd prefer not to have to redo each of the tile layers.

Is this possible?

This is a picture of my minimap. I'd like to move them down around the red square.

","29842","","","","","2015-07-02 02:52:36","Tiled - move all tiles","","1","0","","","","CC BY-SA 3.0" "91838","1","91853","","2015-01-07 08:13:41","","0","1098","

I'm intermediate in Java but a novice in everything server-side. I've set myself the task of rewriting an old game so that learning Java is more fun. It is a turn-based space strategy. The original was called Stars! (I heard that Master of Orion is somewhat similar). Basically, players assign orders to their ships and planets and then submit the turn to the server, where a new turn is generated.

I suppose, I will have a lot of objects (ships, fleets, planets and so on), each of them will have some properties (XY position, orders, HP etc.). When generating a turn, the server will have to cycle through all the objects and perform tasks with each one (move a ship, unload a cargo for instance).

The question is: How should I organize entities, ships for example, so it is easy to process them. Should I store them in a database? Perhaps there is some sort of a best practice for tasks like this?

Suppose ships are stored in a database. Each player sends a file with orders to the server, and the server puts everything into the database. When all the players have submitted their turns, the server cycles through everything in the database and generates a new turn. Would this be a good way to do it?
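
To make the turn-generation step concrete, here is a rough sketch of the loop I have in mind (the class, fields, and order format are all placeholders I made up, not actual game code):

```java
import java.util.List;

// Sketch of server-side turn generation: walk over every entity and
// apply its pending order; the results would then be persisted back.
class Ship {
    int x, y;         // position
    int dx, dy;       // pending move order
    boolean hasOrder; // was an order submitted this turn?

    Ship(int x, int y) { this.x = x; this.y = y; }

    void orderMove(int dx, int dy) {
        this.dx = dx;
        this.dy = dy;
        this.hasOrder = true;
    }
}

class TurnProcessor {
    static void generateTurn(List<Ship> ships) {
        for (Ship s : ships) {
            if (s.hasOrder) {
                s.x += s.dx;
                s.y += s.dy;
                s.hasOrder = false; // order consumed until the next submission
            }
        }
    }
}
```

Whether the ships live in a database or get loaded into memory first, I imagine the processing loop itself looks about the same.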

","58998","","","","","2015-01-07 17:37:03","Organising data of a turn-based strategy","","1","0","","","","CC BY-SA 3.0" "91863","1","91880","","2015-01-07 20:54:07","","2","277","

I've always had a hard time wrapping my head around the 'high levels' of game logic and where/how large components, such as collision detection/physics, rendering, and user input, interact with one another. What I'm asking is how you deal with these higher level interactions.

Here is what I am doing:

interface Stage() { //or a ""scene"" as many call it. Manages the logic for Actors
   init()
   update() //update this stage
   render() //render this stage
}

StageManager() implements Stage { //a stage that manages stages :)
   var stages = array<Stage>
   init() //create a MainStage & other stages that may be used in this specific game
   update() //logic for checking which stage should be active
   render() //tell the active stage to render
   getActiveStage()
}

MainStage() implements Stage {
   var mainCharacter
   var actors = array<GameObject>
   var ...
   init()   //create the player, game world, other initialization stuff
   update() //collision detection, check mainCharacter state, etc
   render() //render all the Actors on this stage
   handleMouseClick()
   handleKeyBoardEvent()
}


Main() { //created on game start
   var manager = new StageManager()
   updateGameState() {
      manager.update()
      loop
   }

   renderState() {
      manager.render()
      loop
   }
}

My main gripe with this is that the StageManager and all other Stages will become enormous and unwieldy as the game scales over time. Off the top of my head, some components can be delegated out such as a Stage uses a PhysicsManager, but there would still be all the keyboard events and what not.

I was thinking of making individual Actors listen to events and giving them an update() method, thus encapsulating what an Actor should do at any given point. However, I would run into complications whenever an Actor needed game-state information (the game time, or a ""power-up"" Actor needing the player's speed), and Stages would no longer have a clear role, since the Actors would now be managing themselves.
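
One shape I could imagine this taking, purely as a sketch of the idea (all names made up, not from any engine): pass each Actor a read-only view of the game state on every update.

```java
// A read-only view of the game state that actors can query.
interface WorldState {
    float gameTime();
    float playerSpeed();
}

// Each actor manages itself, but gets the world handed in each tick.
interface Actor {
    void update(float delta, WorldState world);
}

// Example: a power-up that needs the player's speed.
class SpeedPowerUp implements Actor {
    float bonus;

    @Override
    public void update(float delta, WorldState world) {
        bonus = world.playerSpeed() * 0.5f; // scale off the player's current speed
    }
}
```

That way the Stage still owns the Actors and the world, but the Actors don't need to reach back into it.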

","59021","","","","","2015-01-08 03:07:00","Where does game logic belong?","","2","1","","2015-01-18 10:11:36","","CC BY-SA 3.0" "91872","1","93071","","2015-01-07 23:30:12","","0","1937","

I'm trying to sort out some timing issues within my gameloop and I've seen something that I don't understand.

The Nexus 10 is supposed to have (as far as I'm aware) VSync at 60Hz. That would mean, would it not, that onDrawFrame is called by the system every 16.66ms?

When I try to measure it I get different readings on every iteration like so:

public void onDrawFrame(GL10 gl) {  

        Log.v(""NewTag"",""Millis between this call and last: ""+((System.nanoTime()-newTime)/1000000));

        newTime = System.nanoTime();
}

See my results:

As you can see, it varies between 11.53ms and 21.85ms

Why is it not a constant 16.66ms? Or is it just that nanoTime isn't accurate enough to give a more exact reading?

","22241","","","","","2015-04-29 08:06:39","Accurately measure time between calls to onDrawFrame (Android OpelGL ES 2.0)","","2","2","","","","CC BY-SA 3.0" "91875","1","93345","","2015-01-08 01:47:08","","5","795","

I understand how properly connected clients in a lockstep model deal with lag but what about the lagger? How does the lagger know that he or she is lagging? Should I continuously ping the central server?

Also, as the game follows the lockstep model, how does the lagger sync if he needs to? Should the desync be altogether prevented by fast pinging?

","57906","","","","","2015-02-04 04:57:27","Lockstep dealing with lag","","1","4","","","","CC BY-SA 3.0" "91877","1","91887","","2015-01-08 02:13:48","","10","1636","

I'm making a game (or planning to, at least) and to do that, I need a way to automatically generate names for the NPC ""bosses"" (long explanation and irrelevant here). Something like this is a good example of what I mean.

I have an idea that I can just build a database of names by nationality, maybe first/last pairs, and assign them randomly, with an ignored names list so I don't get something like Homer Simpson and get sued or something.

The problem with that is that I'd need to build up a massive database of names for that to work. It would either take forever or cost money, unless someone has a list of names available for free already.

I have another idea where I make random pairs of vowels and consonants, flip a few, and add them together, but a quick program that does that generated names like these:

  • Seermeecpa
  • Cime
  • Ofmiahwumafi
  • Gozidi
  • Effawided

(For anyone interested in the code, you can see it here)

These are... kind of a mouthful. Well, except ""Gozidi"" -- that one could work. Still, the success rate is clearly not very good.
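
In case the link dies, the generator is roughly this; a reconstruction of the idea rather than the exact linked code:

```java
import java.util.Random;

// Glue together random consonant-vowel pairs and capitalize the result.
class NameGen {
    static final char[] VOWELS = {'a', 'e', 'i', 'o', 'u'};
    static final char[] CONSONANTS =
        {'b', 'c', 'd', 'f', 'g', 'h', 'j', 'k', 'l',
         'm', 'n', 'p', 'r', 's', 't', 'v', 'z'};

    static String generate(Random rng, int syllables) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < syllables; i++) {
            sb.append(CONSONANTS[rng.nextInt(CONSONANTS.length)]);
            sb.append(VOWELS[rng.nextInt(VOWELS.length)]);
        }
        sb.setCharAt(0, Character.toUpperCase(sb.charAt(0)));
        return sb.toString();
    }
}
```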

Is there anything I could do to make the names sound nicer (see below), or should I start making that list? Can I somehow mimic the way humans decide if a name is decent or not, with at least some accuracy? I'd much prefer something like this to a lookup in a big list.

What I mean by ""nicer"" is that, instead of random combinations of characters (which it is, to be fair), things that sound like actual names. They can be from any culture (or sound like they are), real or imaginary, anything at all, so long as your stereotypical dumb, monolingual American like me can say 'em without too much trouble.

If you need extra clarification, go ahead and ask. I'm not really sure what to put here.

Addendum: So far as I can tell, there really aren't tags that fit this question all that well. If anyone who's been here longer can recommend some, that'd be awesome.

","57976","","57976","","2015-01-08 02:38:56","2015-01-08 07:27:48","Name generation","","3","7","5","","","CC BY-SA 3.0" "91901","1","92001","","2015-01-08 12:50:06","","3","914","

I'm in the very early phases of developing a browser based MMOG, kinda like this game but not as cartoonish and with more features. I'm an experienced web developer, yet not once have I ever used websockets.

Are websockets required for something like this?

","54355","","19126","","2015-01-08 16:14:28","2015-11-03 13:31:21","Are sockets required when developing a browser based MMOG?","","2","4","","","","CC BY-SA 3.0" "91907","1","91909","","2015-01-08 15:06:12","","1","460","

I'm trying to make a browser-based game and I don't want to create my own game engine; there's no sense in reinventing the wheel. Should I use Unity as my game engine? Or should I focus on a more JavaScript-based engine like Impact, since it will be a browser game?

","54355","","","user1430","2017-05-31 16:21:43","2017-05-31 16:21:43","Unity engine vs JS engine for browser game","","1","0","1","2015-01-08 15:42:52","","CC BY-SA 3.0" "91911","1","92023","","2015-01-08 16:09:15","","1","279","

Which is a reasonable GL version that has support for vertex texture fetch (VTF) in OpenGL? (For example GL 3.0, 3.1, 3.3.) What texture formats should I expect to be supported on average video cards when doing such a vertex fetch?

Is that possible with GL ES 2?

","27395","","27395","","2015-01-09 02:05:56","2015-01-10 19:14:22","Reasonable texture Formats for VertexTextureFetch in GL","","1","0","","","","CC BY-SA 3.0" "91912","1","92247","","2015-01-08 16:16:59","","1","1075","

I'm stuck with geometry shaders in OpenGL (C++ programming). I want to create a simple cube by drawing one rotated wall six times. Here is my vertex shader (everything has #version 330 core in its preamble):

uniform mat4 MVP;
uniform mat4 ROT;
layout(location=0) in vec3 vertPos;
void main(){
    vec4 pos=(MVP*ROT*vec4(vertPos,1.5));
    gl_Position=pos;
}

Now geometry shader:

layout (triangles) in;
layout (triangle_strip, max_vertices = 6) out;
out vec4 pos;
void main(void)
{
    for (int i = 0; i < 3; i++)
    {
        vec4 offset=vec4(i/2.,0,0,0);
        gl_Position = gl_in[i].gl_Position+offset;
        EmitVertex();
    }
    EndPrimitive();
}

And now fragment shader:

uniform mat4 MVP;
in vec4 pos;
out vec3 color;
void main(){
    vec3 light=(MVP*vec4(0,0,0,1)).xyz;
    vec3 dd=pos.xyz-light;
    float cosTheta=length(dd)*length(dd);
    color=vec3(1,0,0);
}

Well, there is some junk: I also wanted to put shading on my cube, but I've got a problem with sending coordinates. The main problem is that I get my scaled square (via the MVP matrix), and I can even rotate it with a basic interface (the ROT matrix), but when I uncomment my ""+offset"" line I get a mess. What should I do to get a clean six-fold repetition?

","54253","","54253","","2015-01-09 11:37:02","2015-01-18 19:22:11","C++, OpenGL: Building a polyhedron via geometry shader","","2","1","","","","CC BY-SA 3.0" "91919","1","91922","","2015-01-08 18:26:22","","2","2007","

In libGDX there is a simple fade Interpolation that speeds up towards the end of the action animation. What I am looking for is the reverse of it: it needs to start fast and slow down near the end. Is there an Interpolation function for that, or how can I create this effect myself?

Currently I'm just initiating the action directly from the actor.

actor.addAction(Actions.moveTo(currentTouch.x - actor.getWidth() / 2,
 currentTouch.y - actor.getHeight() / 2,
 .2f,
 Interpolation.fade));
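
For what it's worth, the curve shape I'm after is an ease-out: f(0) = 0, f(1) = 1, steepest at the start. A sketch of such a function (which I assume I could wrap in a custom Interpolation subclass by overriding apply(float)):

```java
// Ease-out curve: starts fast and slows toward the end.
class Easing {
    static float easeOut(float a) {
        return 1f - (1f - a) * (1f - a); // inverted quadratic
    }
}
```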
","20075","","20075","","2015-01-08 21:25:23","2015-01-08 21:46:37","Libgdx Actions interpolation Fadeout","","1","0","","","","CC BY-SA 3.0" "91923","1","91936","","2015-01-08 19:29:02","","4","455","

When I create a ""Custom Font"" in Unity 4.6.1, I get this:

""Ascii Start Offset"" indicates which character is first in the font. I've set it to 48 (the zero character) by hand, and my custom font works fine just for numbers.

But I'm creating these fonts at runtime as a build step. I access ""Character Rects"" through Font.characterInfo in code:

font.characterInfo = infoList.ToArray();

So the names don't necessarily align. But I can't figure out how to change ""Ascii Start Offset"" at runtime; none of the available members of Font class seem to be related. Is this possible?

","12670","","","user1430","2015-01-09 16:45:34","2015-01-09 19:40:03","How do I access the ""Ascii Start Offset"" property of a Font at runtime?","","1","0","1","","","CC BY-SA 3.0" "91935","1","91940","","2015-01-08 23:32:02","","1","930","

Some applications allow you to generate a human mesh by simply adjusting parameters. The results are broad and convincing: you can get from a thin Asian girl to a muscular African man just by adjusting those parameters. MakeHuman, for example, exposes the following UI:

What is the technique and what are the formulas used to implement that kind of procedural humanoid generation? Is there any published study/resource with the required information?

","13127","","13127","","2015-01-09 00:31:15","2015-01-09 02:21:37","How can I implement procedural humanoid generation like MakeHuman?","<3d-meshes>","1","5","","","","CC BY-SA 3.0" "91937","1","91938","","2015-01-09 00:25:11","","0","606","

I am writing a 2D real-time RPG in C# and I am trying to implement client-server communication using protocol buffers. I am trying to figure out how to implement delta compression to reduce message sizes.

I have read that protobuf ""optional"" fields take up no extra space on the wire when they are not set (though obviously they still do in local memory), so if I can just determine my deltas efficiently and programmatically, I would be in better shape.

Here's the problem. I can think of a few ways, but none seem to be ideal. I was wondering if someone could point me in the right direction.

  1. Try to keep a dirty bit array for EACH message-able class that gets cleared when a message is sent and marked when a variable changes. Then to send a message you just send the members that match the Boolean fields. This has LOW maintainability, but probably decent performance.

  2. Every message-able class keeps a ""pastMessage"" member that gets saved when a message is sent. Then just serialize your current state and diff the two messages. This would probably have LOW performance and almost DOUBLE the memory overhead for the game types.

  3. Create a new message every time a message was sent and ""fill-in-as-you-go."" This would provide a better performance than the above, but still would essentially double my memory overhead on my server.

Any other ideas?

","49062","","40264","","2016-02-17 23:10:47","2016-02-17 23:10:47","Implementing Client-Server Delta Compression (with Protobufs)","","1","0","","","","CC BY-SA 3.0" "91939","1","92113","","2015-01-09 01:44:22","","2","101","

I'm just starting to learn the fundamentals of OpenGL via LWJGL. Every OpenGL function is implemented as a method on a GLxx class. The xx corresponds to the version of the spec when that function was introduced, such as GL20 for functions added in OpenGL 2.0. So far, so good.

The difficulty comes when following tutorials or looking at code that is written against the C API. I'm finding myself having to either guess or Google the version for every single function that I want to use. This is quite time consuming.

Is there a quick way of finding out in which version of OpenGL any given feature was introduced? (Or any other way of figuring out the right LWJGL class for a function?)

","2246","","","user1430","2015-01-12 16:59:57","2015-01-12 20:19:56","How do find the right GLxx object for a given function in LWJGL?","","2","0","","","","CC BY-SA 3.0" "91942","1","91944","","2015-01-09 04:14:32","","0","1889","

I would like to know how someone can make an animation like this, in MonoGame specifically. I hate asking questions this general, but since I am completely clueless about how to achieve this, adding any more to the question wouldn't really help. Even the general concept behind this is highly appreciated.

","55342","","","","","2015-01-12 15:32:42","C# xna/monogame ghost trail effect","<2d>","2","0","2","","","CC BY-SA 3.0" "91945","1","91947","","2015-01-09 07:17:17","","3","573","

I am developing a game similar to ""Street Fighter"" and two players can fight each other via Internet.

The networking model is ""lockstep"" by trying to sync user controller status for each frame.

When a game starts, the basic sequences are:

  1. Random matching
  2. Game Start
  3. Player A sends out a message to B every 30ms, and vice versa.

The question is, in step 2, ""Game Start"", these two players must start at the same time ""physically"".

I am thinking of negotiating a timestamp (based on UTC) for the two players to start the game at exactly that moment. However, I suspect the timestamp might be device dependent. Maybe it's possible for a device to report a timestamp that is 10 or 100 seconds behind the other player's.
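
One idea I had for the device-clock problem is an NTP-style offset estimate over the peer connection. This is only a sketch of the arithmetic, not something I've tested:

```java
// NTP-style clock offset estimation from one request/response round trip:
// t0 = our send time, t1 = peer's receive time,
// t2 = peer's send time, t3 = our receive time (all in ms).
class ClockSync {
    // Positive result means the peer's clock is ahead of ours.
    static double estimateOffset(double t0, double t1, double t2, double t3) {
        return ((t1 - t0) + (t2 - t3)) / 2.0;
    }

    // Network round-trip time, excluding the peer's processing delay.
    static double estimateRoundTrip(double t0, double t1, double t2, double t3) {
        return (t3 - t0) - (t2 - t1);
    }
}
```

With an offset estimate, one peer could propose a start time on its own clock and the other converts it into local time, instead of trusting raw UTC from both devices.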

Therefore, I am wondering what's the best solution for this kind of situation when implementing a networking game like this (no server involved)?

","57460","","","","","2015-01-09 08:00:37","how to sync two players to start at the same time for a head-to-head networking game?","","1","0","1","","","CC BY-SA 3.0" "91948","1","91971","","2015-01-09 08:11:12","","2","348","

When I first started trying to set up my air units' movement in my RTS game, I thought it was going to be easy with simple Euler-method linear acceleration. This was far from the case, as the ship has to slow to a halt exactly on the target destination, as well as properly compensate after a sharp turn.

How would I decelerate my unit exactly onto the target destination and properly compensate for changed destinations while still factoring in the current velocity in a 2D simulation (Positions and velocity stored in 2D vectors)?

StarCraft 2 does this well, and I'd like to arrive at a similar effect. I want something like shown here. Acceleration can be seen at 0:20. Turning and maintaining momentum can be seen at 1:25. Deceleration can be seen at 1:55. (Find these times in recent comments of the video).

Note: I'm using a lockstep model and frame rate can be either 5 or 10 per second depending on players' computer abilities.
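
For reference, the rule I keep circling back to is the stopping-distance check: brake once the remaining distance is within v^2 / (2a). This is only a sketch under the assumption of a single fixed acceleration magnitude, not engine code:

```java
// Per-tick speed update: brake when the remaining distance to the target
// is within the stopping distance v^2 / (2a); otherwise keep accelerating.
class Arrival {
    static double nextSpeed(double distRemaining, double v, double aMax, double dt) {
        double stoppingDist = (v * v) / (2.0 * aMax);
        if (distRemaining <= stoppingDist) {
            return Math.max(0.0, v - aMax * dt); // decelerate toward zero
        }
        return v + aMax * dt; // accelerate (clamping to max speed elsewhere)
    }
}
```

Because the check only uses the current distance and velocity, I'd expect it to adapt automatically when the destination changes mid-flight, but I don't know if that is how games like StarCraft 2 actually do it.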

","57906","","","user1430","2015-01-09 17:06:39","2015-04-26 20:41:01","How can I manage the deceleration of units so that they arrive on-target?","","1","8","5","","","CC BY-SA 3.0" "91953","1","91966","","2015-01-09 10:12:00","","0","130","

How can I change the animation view in Unity3D? I only see some ""dots"", but I want to see a curve! How do I change this?

What I have:

And what I want:

(image from the documentation of Unity3D)

","57255","","24755","","2015-01-09 15:31:05","2015-01-09 15:36:04","How to change animation view in Unity3D?","","1","0","","","","CC BY-SA 3.0" "91962","1","91963","","2015-01-09 15:11:45","","0","78","

I'm working through a game in Slick2D and am now focusing on a lot of the graphical end. It's going pretty well other than a couple of issues with alignment. An image illustrating both of these issues is here:

Firstly, the background. I have 2 Entity objects containing coordinates, velocity, and an image for the background. Both contain the same image (1600x600). For the first, x=0 and for the second, x=1600. They both move to the left at a speed of 1 pixel per frame. No matter if I make the offset smaller (like 1580), I am still left with a gap showing the background colour. I've put the two images together in Photoshop and they blend together, so it's not an issue with the image. Here is some relevant code:

//in init block
Entity bg1 = new Entity(""background"",1600,600);
        bg1.setXVelocity(BACKGROUND_VELOCITY);
        Entity bg2 = new Entity(""background"",1600,600); //a second one is needed for endless background
        bg2.setXVelocity(BACKGROUND_VELOCITY);
        bg2.setX(1600);
        backgroundList = new ArrayList<Entity>();
        backgroundList.add(bg1);
        backgroundList.add(bg2);
...
//in update loop, running every frame
for (Entity bg : backgroundList){
    bg.update();
    if (bg.getX() < -1*bg.getSpriteWidth()){
        bg.setX(1600);
    }
}

In other words, what I expect my code to do is have both images move left one pixel each until the left image reaches -1600 (-1*bg.getSpriteWidth()), at which point the right image will be at 0, and with a width of 1600 the first image would be moved to 1600. This is, at least, how I expect my code to work. bg.update() calls a method which adjusts the position based on the velocity given, so it will move x backward by 1 each time for both backgrounds.

Secondly, the misaligned walls. As you can see in the image, the walls on the right are perfectly aligned. They move left together at the same velocity in the same way as above. For some reason, however, as the walls pass the mid-way mark (where the fish is), the upper wall's velocity slows for a split second, which misaligns it with the lower wall (though the velocity returns afterwards and they stay that constant distance apart). I have actually found the code causing this (when commented out it stops), but I cannot see why it would do such a thing. The values remain correct, but the graphical output is distorted.

for (int i = 0; i < wallList.size(); i++) {
    Wall wall = wallList.get(i);
    wall.update();
    if (wall.getX() < -1 * wall.getImageWidth()) {
        wallList.remove(i);
    }                       
}
...
if (wallList.get(wallList.size() - 1).getX() < WIDTH - DISTANCE_BETWEEN_WALLS) {
    createWalls();
}

...
private void createWalls() {
    Wall w1 = new Wall();
    Wall w2 = new Wall();
    int yPos = rand.nextInt(400) + 50 - w1.getImageHeight();
    w1.setLocation(WIDTH, yPos);
    w1.setXVelocity(WALL_VELOCITY);
    w1.setVisible(true);
    w2.setLocation(WIDTH, yPos + 11 * player.getImageHeight() / 3 + w2.getImageHeight());
    w2.setXVelocity(WALL_VELOCITY);
    w2.setVisible(true);
    wallList.add(w1);
    wallList.add(w2);
}

It's the wallList.remove(i); line that is causing this offset to happen. When I remove the line, the walls stay aligned as intended. Any ideas? Are these just imperfections of Slick2D? Perhaps there is a better way for me to do these things?

","59093","","","","","2015-01-09 15:22:41","Slick2D Graphics Misaligned","","1","0","","","","CC BY-SA 3.0" "79039","1","79074","","2014-06-21 03:13:04","","0","108","

I'm developing a scenario in which a specific gunner unit shoots incoming targets, but I'm confused about the actual logic of shooting an object with a gun. I'm working in a 3D environment. I have the target's position vector, i.e. [x,y,z], and its rotation vector [x,y,z,angle], and also the gun's position vector, i.e. [x,y,z], and its rotation vector [x,y,z,angle]. The gun has two components, a TopDondur and a set of Barrels; both components have their rotation along the y-axis, i.e. only the angle of the rotation vector [0 1 0 angle] is required to turn the gunner unit toward the target.

What rotations should I give the gunner unit so that the TopDondur rotates toward the target and the Barrels move up/down with respect to that target?
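
To make the goal concrete, this is the kind of computation I believe I'm after, assuming Y is up and atan2-style angles in radians (just a sketch, the names are mine):

```java
// Aim angles from gun position (gx, gy, gz) to target position (tx, ty, tz):
// yaw is the rotation about Y for the TopDondur, pitch is the up/down
// elevation for the Barrels.
class TurretAim {
    static double yaw(double gx, double gz, double tx, double tz) {
        return Math.atan2(tx - gx, tz - gz);
    }

    static double pitch(double gx, double gy, double gz,
                        double tx, double ty, double tz) {
        double dx = tx - gx, dy = ty - gy, dz = tz - gz;
        return Math.atan2(dy, Math.sqrt(dx * dx + dz * dz));
    }
}
```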

","45510","","5404","","2015-05-11 14:05:54","2015-05-11 14:05:54","3d rotation of a Gunner unit to shoot the target","","1","0","","","","CC BY-SA 3.0" "79047","1","79633","","2014-06-21 09:26:13","","1","4068","

I use the script below to take a screenshot from the camera. It's working fine. However, when I take a screenshot again (pressing k multiple times), the old image is not cleared from memory and it keeps drawing over the same image over and over again. If the object is moving while I take screenshots multiple times, the image I get has every frame combined into one. What should I do to fix this problem?

using UnityEngine;
using System.Collections;

public class HiResScreenShots : MonoBehaviour {
    public int resWidth = 2550; 
    public int resHeight = 3300;

    private bool takeHiResShot = true;

    public static string ScreenShotName(int width, int height) {
        return string.Format(""{0}/screenshots/screen_{1}x{2}_{3}.png"", 
                             Application.dataPath, 
                             width, height, 
                             System.DateTime.Now.ToString(""yyyy-MM-dd_HH-mm-ss""));
    }

    public void TakeHiResShot() {
        takeHiResShot = true;
    }

    void LateUpdate() {
        takeHiResShot |= Input.GetKeyDown(""k"");
        if (takeHiResShot) {
            RenderTexture rt = new RenderTexture(resWidth, resHeight, 24);
            camera.targetTexture = rt;
            Texture2D screenShot = new Texture2D(resWidth, resHeight, TextureFormat.ARGB32, false);
            camera.Render();
            RenderTexture.active = rt;
            screenShot.ReadPixels(new Rect(0, 0, resWidth, resHeight), 0, 0);
            camera.targetTexture = null;
            RenderTexture.active = null; // JC: added to avoid errors
            Destroy(rt);
            byte[] bytes = screenShot.EncodeToPNG();
            string filename = ScreenShotName(resWidth, resHeight);
            System.IO.File.WriteAllBytes(filename, bytes);
            Debug.Log(string.Format(""Took screenshot to: {0}"", filename));
            takeHiResShot = false;
            Debug.Log(""Capture!!"");
        }
    }
}
","43924","","","","","2014-07-02 08:35:56","Take a screenshot from camera problem","","1","0","","","","CC BY-SA 3.0" "79049","1","79050","","2014-06-21 09:51:51","","28","34403","

I am programming a tile-based game and I have some basic tiles (grass, dirt, etc.), but I can't figure out how to do good random map generation, because when I make a truly random selection of whether each tile should be grass or dirt, I get this:

I understand why this is happening, but what I want is to create some random continuous areas of grass or dirt. Something that would make more sense, like this:

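For reference, the ""really random selection"" I'm doing now is essentially an independent coin flip per tile, roughly like this (a minimal sketch; the tile IDs are made up):

```java
import java.util.Random;

// Naive generation: every tile is chosen independently of its neighbors,
// which is exactly why the result looks like noise.
class NaiveMap {
    static final int GRASS = 0, DIRT = 1;

    static int[][] generate(int w, int h, long seed) {
        Random rng = new Random(seed);
        int[][] map = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                map[y][x] = rng.nextInt(2); // 50/50 grass or dirt
            }
        }
        return map;
    }
}
```

What I'm after instead is some way of making a tile's choice depend on the tiles around it.
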
","45348","","46764","","2014-06-22 10:17:25","2015-01-15 17:12:52","Generating tile map","","4","1","28","","","CC BY-SA 3.0" "79055","1","79100","","2014-06-21 11:59:46","","1","281","

I have made a Flash game and I want to show a leaderboard in it. I have uploaded my game to kongregate.com.

I have made a scoreboard there with the name overLoadScore (Overload is the name of my game). I dragged in an instance of the API Loader component.

In my Main.as file I have this code:

var scoreBoard:ScoreBoard = new ScoreBoard();
scoreBrowser.scoreBoardName = ""overLoadScore"";
scoreBrowser.loadScores();
scoreBoard.period = ScoreBoard.ALL_TIME;

I am getting the error: call to a possibly undefined method ALL_TIME with reference to static type class ScoreBoard.

Although I have provided the correct API ID and encryption key, in the output window I am getting the message: [Newgrounds API] :: No API ID entered in the API Connector component.

And although I have created the scoreboard which I mentioned earlier, I also get this:

[Newgrounds API] :: No scoreboards created for this movie.

","45316","","45316","","2014-06-22 08:18:26","2014-06-22 08:23:27","integrating leaderboard from Newgrounds in flash game","","1","5","","2014-06-22 20:22:31","","CC BY-SA 3.0" "79057","1","79064","","2014-06-21 12:28:31","","14","14073","

I am planning to create an indie game using Java and Eclipse IDE and I want to put the finished product to Steam Greenlight.

How does the whole process work after the game is finished and running only in Eclipse?

","33992","","","user1430","2014-10-17 19:45:17","2016-12-27 15:46:53","How To: Java Game to Steam Greenlight","","2","1","7","","","CC BY-SA 3.0" "79060","1","79070","","2014-06-21 13:32:21","","0","2516","

I quite understand how projective texturing works. I implemented successfully a shader for that following nvidia doc.

The major problem I'm facing is that with that implementation the projector frustum is used only to determine the texture coordinate in projective space, but it doesn't clip anything outside the projective frustum volume.

In other words, if I have a projector pointing toward an object, the texture will be projected onto it even if the object is outside the frustum. In addition, when the projector is almost parallel to the projected surface, the texture stretching is too evident and I would like to fade it out.

Now, I'm trying to understand how Unity handles their built-in projectors. I found this example. Here's vertex and fragment relevant code:

v2f vert (float4 vertex : POSITION)
{
  v2f o;
  o.pos = mul (UNITY_MATRIX_MVP, vertex);
  o.uvShadow = mul (_Projector, vertex);
  o.uvFalloff = mul (_ProjectorClip, vertex);
  return o;
}

fixed4 frag (v2f i) : SV_Target
{
  fixed4 texS = tex2Dproj (_ShadowTex, UNITY_PROJ_COORD(i.uvShadow));
  texS.rgb *= _Color.rgb;
  texS.a = 1.0-texS.a;

  fixed4 texF = tex2Dproj (_FalloffTex, UNITY_PROJ_COORD(i.uvFalloff));
  fixed4 res = texS * texF.a;
  return res;
}

I think the _Projector matrix is a classic matrix that transforms vertex coordinates into projector space. (In fact the fragment shader just uses the transformed coordinates to sample the projected texture.)

What I'm really missing is how Unity constructs the _ProjectorClip matrix. The transformed vertex coordinates are used to sample the falloff texture (which I think is what I really need).

Does anyone know how _ProjectorClip is constructed? Or how to achieve a similar effect?

Note: I did something similar for calculating spotlights falloff, but there I used a single matrix multiplication in the vertex shader to transform vertices into lightspace, and the squared distance in the fragment shader to calculate texture coords for a lookup into the attenuation texture.

","10684","","","","","2014-06-21 17:55:09","Projective texturing and falloff","","1","0","","","","CC BY-SA 3.0" "79075","1","79098","","2014-06-21 19:55:36","","1","3786","

In my game, I have:

GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texID);

Where texID is an integer returned by my setTexture() method. Let's say, in this instance it's 1.

When I bind my textures during rendering calls, I don't want to bind this texture every single call, because I use different atlases of various textures and it's pretty wasteful to keep re-binding a texture when it's not required.

So, I would like to do something like this (Pseudo code)

if (texID != *currentTexture*){
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texID);
}

Therefore, if the texture I want to use is the same as the one already bound, the call to re-bind it will be ignored.

I can't work out how to get the ID of the texture that is currently bound.

I've tried

GLES20.glActiveTexture();

But this doesn't return the texture ID; it deals with texture units.

Help appreciated
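
To be clear about what I'm trying to achieve, the caching boils down to this, with the actual GL call stubbed out behind an interface (GlBind here is just a stand-in for GLES20.glBindTexture):

```java
// Remembers the last-bound texture ID and skips redundant bind calls.
class TextureBinder {
    interface GlBind {
        void bind(int texID); // stand-in for GLES20.glBindTexture
    }

    private final GlBind gl;
    private int current = -1; // nothing bound yet

    TextureBinder(GlBind gl) {
        this.gl = gl;
    }

    void bind(int texID) {
        if (texID != current) {
            gl.bind(texID); // the real GL call happens only here
            current = texID;
        }
    }
}
```

If tracking the ID myself like this is the only way, fine; but I'd like to know whether the currently bound ID can be queried from GL instead.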

Edit

I've tried the suggestion below but I'm getting the following errors:

When hovering over the underline, this is the error:

I had searched this site and the wider web for a couple of hours before posting here but couldn't find a proper usage example, just this out of context snippet.

","22241","","22241","","2014-06-21 23:35:14","2014-06-22 08:04:25","How to obtain the currently bound texture ID in an openGL ES 2.0 project","","2","1","","","","CC BY-SA 3.0" "79076","1","79088","","2014-06-21 20:04:53","","1","2584","

I'm back with another question that may be really simple.

I've a texture drawn on my spritebatch and I'm making it move up or down (y-axis only) with Libgdx's Input Handler: touchDown and touchUp.

@Override
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
    myWhale.touchDownY = screenY;
    myWhale.isTouched = true;
    return true;
}

@Override
public boolean touchUp(int screenX, int screenY, int pointer, int button) {
    myWhale.isTouched = false;
    return false;
}

myWhale is an object of the Whale class, in which I move my texture's position:

public void update(float delta) {
    this.delta = delta;
    if(isTouched){
        dragWhale();
    }
}

public void dragWhale() {
    if(Gdx.input.getY(0) - touchDownY < 0){
        if(Gdx.input.getY(0)<position.y+height/2){
            position.y = position.y - velocidad*delta;
        }
    }
    else{
        if(Gdx.input.getY(0)>position.y+height/2){
            position.y = position.y + velocidad*delta;
        }
    }
}

So the object moves to the center of the position where the person is pressing his/her finger and most of the time it works fine but the object seems to take about half a second to move up or down and sometimes when I press my finger it wont move.

Maybe there's another, simpler way to do this. I'd highly appreciate it if someone pointed me in the right direction.
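One possible direction, sketched under assumptions (the FollowY class, its fields, and the dead-zone idea are mine, not from the question's code): move toward the touch y with a frame-rate-independent step that is clamped so it never overshoots, and stop inside a small dead zone so the sprite doesn't oscillate around the finger:

```java
// Minimal sketch of smooth, non-overshooting y-follow movement.
class FollowY {
    float y;               // current sprite center y
    final float speed;     // movement speed in units per second
    final float deadZone;  // stop when this close to the target

    FollowY(float y, float speed, float deadZone) {
        this.y = y;
        this.speed = speed;
        this.deadZone = deadZone;
    }

    void update(float targetY, float delta) {
        float diff = targetY - y;
        if (Math.abs(diff) <= deadZone) {
            return; // close enough: don't jitter around the finger
        }
        float step = speed * delta;
        // clamp the step so the sprite never moves past the target
        y += Math.signum(diff) * Math.min(step, Math.abs(diff));
    }
}
```

Calling something like update(Gdx.input.getY(0), delta) each frame from the whale's update (modulo converting screen to world coordinates) would replace the two nested if blocks in dragWhale.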

","48352","","1929","","2014-07-10 18:16:11","2016-05-15 19:36:08","How to make an Actor follow my finger","","3","2","1","","","CC BY-SA 3.0" "79077","1","79081","","2014-06-21 20:20:19","","-3","91","

Many games do just fine with two projections, that can be represented by a matrix (orthographic and linear perspective). But what about projections that can't be represented by a matrix? Can you please provide some examples of such projections and why they might be used in a game application?

","16929","","16929","","2014-06-21 23:31:02","2014-06-21 23:31:02","projection and matrices","","1","7","","2014-06-21 22:17:17","","CC BY-SA 3.0" "79079","1","79130","","2014-06-21 21:08:02","","0","858","

I've been looking over my code and I'm just wondering: when I set a texture for, say, 20 quads that all need the same texture, it seems as though I'm creating a new texture each time. Surely this isn't efficient?

How can I make sure that 20 quads that use the same texture do just that, use only 1 rather than creating 20 copies of it?

I just can't work out how I can do this. I know I can change the texID after applying the texture so that all quads that use the same texture will have the same texture ID, but this isn't really the issue I'm facing.

Of course, I may be completely misunderstanding how OpenGL deals with textures and my code may be OK :-/

Code:

    public void setTexture(GLSurfaceView view, Bitmap imgTexture){
        this.imgTexture=imgTexture;                 
        //Create program from Utils class
        iProgId = Utils.LoadProgram(strVShader, strFShader);
        //Return location of u_baseMap
        iBaseMap = GLES20.glGetUniformLocation(iProgId, ""u_baseMap"");
            //Return location of attribute variables
        iPosition = GLES20.glGetAttribLocation(iProgId, ""a_position"");
        iTexCoords = GLES20.glGetAttribLocation(iProgId, ""a_texCoords"");
        //Return usable texture ID from Utils class
                texID = Utils.LoadTexture(view, imgTexture);

}

And....

public static int LoadTexture(GLSurfaceView view, Bitmap imgTex){

    //Array for texture
    int textures[] = new int[1];
    try {
        //Return texture name in textures Array
        GLES20.glGenTextures(1, textures, 0);
        //Bind textures
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
        //Set parameters
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);

        //Apply the texture to the image loaded
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, imgTex, 0);

        //clamp the texture
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T,GLES20.GL_CLAMP_TO_EDGE);        
    } catch (Exception e){          
    }
    //Increase texture count by one
    textureCount++;
    return textures[0];
}

Edit

I've attempted the suggestion below (just calling glBindTexture()), however I'm not having any luck. Clearly I'm not understanding this correctly.

Let's say I have two sprites, object1 and object2.

I call the following:

object1.setTexture(this, myAtlasTexture); //Calls setTexture (See above)

This works and sets the texture on this sprite, so I can now draw it.

Now I want object2 to use the same texture. So I don't want to call this again and create a new texture (I believe this is a bad idea) - I simply want to use the texture that was previously created when I called setTexure on object1.

So, instead of calling setTexture on object 2, I do the following:

GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureIDOfObject1);

(Note textureIDOfObject1 is just that, I just query object1 to obtain its texture ID).
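The sharing itself can be centralized in a small cache so LoadTexture runs at most once per image, and both sprites receive the same texture name. A sketch (the TextureCache class and its key scheme are hypothetical; the Supplier stands in for the real Utils.LoadTexture call):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// One GL texture per key: the loader runs only on the first request,
// and every later request for the same key returns the cached texture ID.
class TextureCache {
    private final Map<String, Integer> cache = new HashMap<>();

    int get(String key, Supplier<Integer> loader) {
        return cache.computeIfAbsent(key, k -> loader.get());
    }
}
```

Both sprites would then call something like cache.get(""atlas"", () -> Utils.LoadTexture(view, myAtlasTexture)) and simply store the returned ID, instead of one sprite borrowing the other's ID.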

","22241","","33287","","2018-08-29 13:44:16","2018-08-29 13:47:09","Re-using one texture for multiple OpenGL Quads","","1","9","","","","CC BY-SA 4.0" "79085","1","79086","","2014-06-21 23:33:19","","0","740","

I'm rendering an attack animation with the following code:

currentFrame = attackAnimation[3].getKeyFrame(stateTime, false); //3 : left, 2: right
batch.draw(currentFrame, getX(), getY());

This renders currentFrame at the x and y position of the character. This works fine for the right-facing animation, as you can see here:

but when I apply the same code for the left animation, for example, it doesn't draw the character in the correct place: it ""pushes itself away"" from the bottom-left corner and leaves the bounding box.

What am I doing wrong or what am I not seeing?

setGameObject(new Sprite(txtIdle[2]));
setSize(getWidth(), getHeight());
setOrigin(getWidth() /2.0f, getHeight() /2.0f);
setPosition(500, 500);

This code sets an idle image and the size of the bounding box (and the sprite).

","48432","","48432","","2014-06-21 23:42:11","2014-06-22 00:34:10","Render character in bounding box","","2","0","","","","CC BY-SA 3.0" "79089","1","79102","","2014-06-22 01:24:03","","1","1056","

I have two questions.

I'm working on a roguelike and I've managed to implement a camera centered on the player; it shows just the player's surroundings and is fixed to the side of the window.

The problem is, when the player is close to the edges of the map, there's a black space on the surface. Like so:

1) How do I make the camera 'snap' to the side and not go any further?

To draw the map

  • I took the camera rect positions [topleft and bottomright];
  • converted them to world positions;
  • iterated over them, with an enumerator as well;
  • did the lit/visited fog calculations with x and y;
  • and blitted to the screen using the enumerators 'i' and 'j'.

Here's the code:

topleft = Map.toWorld(camera.rect.topleft)
bottomright = Map.toWorld(camera.rect.bottomright)
for i, x in enumerate(xrange(topleft[0], bottomright[0])):
    for j, y in enumerate(xrange(topleft[1], bottomright[1])):
        tile = mymap.tileAt(x, y)
        object = [obj for obj in Object.OBJECTS if obj.pos == (x,y)]
        if tile:
            lit = field_of_view.lit(x, y)
            visited = field_of_view.visited(x, y)
            graphic = tile.graphic
            if lit:
                color = tile.color
            elif visited:
                color = GRAY
            else:
                color = BLACK
            renderedgraphic = myfont.render(graphic, 1, color)
            screen.blit(renderedgraphic, Map.toScreen((i + 1, j)))
        if object:
            Draw.drawObject(object[0], Map.toScreen((i + 1, j)))

I saw an example of this HERE, but I couldn't adapt the code to my game because it uses sprites.

I also tried a different approach using just a camera.center position:

in short:

class Object():
    def update(self):
        self.relX = self.x - camera.x
        self.relY = self.y - camera.y

class Camera(Object):
    def update(self):
        self.x = self.relX = PLAYER.x
        self.y = self.relY = PLAYER.y

def MapDraw()
    for x in xrange(mymap.width):
        for y in xrange(mymap.height):
            ... # do fov stuff        
            tile = Tile.At(x, y)  # get tile instance
            if tile:
                tile.update() # update it's relative position
                screen.blit(renderedgraphic, (tile.relX * TILESIZE, tile.relY * TILESIZE)) # blit the tile at its relative position

This is what happens:

which leads to the next question:

2) Is there a better way to create a camera than using this 'hack' of enumerating?

EDIT: Answer

So after some struggling, I tried the old trick of printing out whatever I had, and found out that I could use camera.x and camera.y for this.

First I checked the distance between camera.x and the desired position at the edges: it was 3 for up and down, 7 for the sides.

Then a couple of if-statements fixed this. But since those ints aren't consistent across map sizes and camera views, I found that the camera rect's width and height, divided by 2 and converted to world position, gave the same (7, 3) I was looking for.

Anyway, here's the code for the camera.

def update(self):
    self.x, self.y = PLAYER.pos # center camera on player

    self.size = Map.toWorld((camera.rect.width, camera.rect.height)) ## get the ScreenSize of the rect and convert to World Position
    self.size[0] /= 2 # divided each one by two, because the player is in the center.
    self.size[1] /= 2

    if self.x < self.size[0]:  # left edge
        self.x = self.size[0]
    if self.y < self.size[1]: # top edge
        self.y = self.size[1]
    if self.x + self.size[0] > mymap.width: # right edge
        self.x = mymap.width - self.size[0]
    if self.y + self.size[1] > mymap.height: # bottom edge
        self.y = mymap.height - self.size[1]

    self.rect = pygame.Rect((self.x * TILESIZE) - CAMERA_WIDTH / 2,
                            (self.y * TILESIZE) - CAMERA_HEIGHT / 2, CAMERA_WIDTH, CAMERA_HEIGHT) # updated the rect

Thanks Tyyppi_77; even though you provided a short answer, the way you phrased it was just right for me to get it.

","48435","","-1","","2017-05-23 12:37:39","2014-06-22 11:14:20","Pygame Scrolling Map","","1","0","1","","","CC BY-SA 3.0" "79095","1","79299","","2014-06-22 07:16:36","","8","1858","

First of all, note that I want to understand the principle, so I would prefer explanations in plain english (but of course I have nothing against some code to complement these explanations).

I am wondering how to make a character step correctly on a slope. Let's start with two screenshots of the game Limbo to show you what I mean:
So, as you can see in these pictures, the spine of the character appears to be perpendicular to the horizon rather than to the surface of the ground, and this observation leads me to my question:
How can you reproduce such a behavior with Sprite Kit? If I attach a physics body to a node, I have the option of letting it rotate, in which case I get something like this:

If I disable the rotation, I get something more like this:
As you can see, both of these approaches have significant issues and don't look natural at all. So what should I do to make it look natural? I thought about creating several sprites and several physics bodies and then attaching them all together with joints, but I don't think that will look natural either, because if you look at this:
Although I can physically link these nodes, how can I make them look right graphically? Because in reality, for example, the thigh and calf are linked by a knee...

Any ideas ?

","48440","","","user1430","2014-06-25 15:01:21","2014-06-25 16:46:53","How can I make a character stand on slopes?","","2","7","6","","","CC BY-SA 3.0" "79107","1","79111","","2014-06-22 11:16:54","","1","659","

Optimizing modern OpenGL relies on aggressive batching, which is done by calls like glMultiDrawElementsIndirect. Although glMultiDrawElementsIndirect can render a large number of different meshes, it makes the assumption that all these meshes are made of the same primitives (eg. GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_POINTS).

To batch rendering most efficiently, is it wise to force everything to be GL_TRIANGLES (forgoing possible optimizations from GL_TRIANGLE_STRIP or GL_TRIANGLE_FAN) so that more meshes can be grouped together?

This thought comes from reading the Approaching Zero Driver Overhead slides, which suggest drawing everything (or, presumably, as much as possible) in a single glMultiDrawElementsIndirect call.
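For what it's worth, converting existing strip-indexed meshes into plain GL_TRIANGLES index lists is mechanical, so the trade-off is mostly memory (roughly 3x the indices) versus batch size. A sketch of the conversion (my own helper, not something from the slides):

```java
// Expand a triangle-strip index list into independent GL_TRIANGLES indices.
// A strip alternates winding, so every odd triangle swaps its first two
// indices here to keep a consistent front face.
class StripConverter {
    static int[] stripToTriangles(int[] strip) {
        if (strip.length < 3) {
            return new int[0]; // not enough indices for a single triangle
        }
        int triCount = strip.length - 2;
        int[] out = new int[triCount * 3];
        for (int i = 0; i < triCount; i++) {
            if (i % 2 == 0) {
                out[i * 3]     = strip[i];
                out[i * 3 + 1] = strip[i + 1];
            } else {
                out[i * 3]     = strip[i + 1];
                out[i * 3 + 1] = strip[i];
            }
            out[i * 3 + 2] = strip[i + 2];
        }
        return out;
    }
}
```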

","48443","","48443","","2014-06-22 20:43:50","2014-06-22 20:43:50","Should all primitives be GL_TRIANGLES in order to create large, unified batches?","<3d>","1","2","1","","","CC BY-SA 3.0" "79108","1","79127","","2014-06-22 11:31:08","","3","11457","

I'm working on a tile-based platformer game in libGDX. I'm having trouble getting the actual touch input coordinates when the aspect ratio of the device differs from the fixed virtual aspect ratio I'm working with. I'm using a virtual resolution of 480x800. All the rendering and camera work is done in the GameRenderer class, and input is handled in the InputHandler class. I've tried implementing the camera.unproject() method, but it didn't do any good. I added the unproject method to the GameRenderer class since my camera is defined there, then sent the screen touch coords from the InputHandler class to the GameRenderer class and returned the converted coords back to InputHandler.

I'm posting the relevant code from both classes.

GameRenderer:

public class GameRenderer 
{
    public static OrthographicCamera cam;
    private ShapeRenderer shapeRenderer;
    private SpriteBatch batcher;
    private static Player player=GameWorld.getPlayer();

    private static final int VIRTUAL_WIDTH = 800;
    private static final int VIRTUAL_HEIGHT = 480;
    private static final float ASPECT_RATIO = (float)VIRTUAL_WIDTH/(float)VIRTUAL_HEIGHT;
    private Rectangle viewport;
    public static Vector2 crop = new Vector2(0f, 0f); 
    public static float scale = 1f;
    public static int Case=0;
    public static float width;
    public static float height;
    public static float w;
    public static float h;

    public GameRenderer(GameWorld world) 
    {
        cam = new OrthographicCamera();
        cam.setToOrtho(true, 800, 480);
        batcher=new SpriteBatch();
        batcher.setProjectionMatrix(cam.combined);
        shapeRenderer = new ShapeRenderer();
        shapeRenderer.setProjectionMatrix(cam.combined);
    }


    public void render()
    {
        cam.update();
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

        height=Gdx.graphics.getHeight();
        width=Gdx.graphics.getWidth();

        float aspectRatio = (float)width/(float)height;


        if(aspectRatio > ASPECT_RATIO)
        {
            scale = (float)height/(float)VIRTUAL_HEIGHT;
            crop.x = (width - VIRTUAL_WIDTH*scale)/2f;
            Case=1;
        }
        else if(aspectRatio < ASPECT_RATIO)
        {
            scale = (float)width/(float)VIRTUAL_WIDTH;
            crop.y = (float)(height - VIRTUAL_HEIGHT*scale)/2f;
            Case=2;
        }
        else
        {
            scale = (float)width/(float)VIRTUAL_WIDTH;
        }

        w = (float)VIRTUAL_WIDTH*scale;
        h = (float)VIRTUAL_HEIGHT*scale;


        viewport = new Rectangle(crop.x, crop.y, w, h);

        Gdx.gl.glViewport((int) viewport.x, (int) viewport.y, (int) viewport.width, (int) viewport.height);

        switch(GameWorld.state)
        {
            case Running: renderRunning(); break;
            case GameOver: renderGameOver(); break;
            case Paused: renderPaused(); break;
            default: break;
        }

    }

    public static Vector3 unprojectCoords(Vector3 coords)
    {
        cam.unproject(coords);
        return coords;
    }

}

InputHandler:

public class InputHandler implements InputProcessor 
{

    @Override
    public boolean touchDown(int screenX, int screenY, int pointer, int button) 
    {

        Vector3 coords=new Vector3(screenX,screenY,0);
        Vector3 coords2=GameRenderer.unprojectCoords(coords);

        screenX=(int) coords2.x;
        screenY=(int) coords2.y;

        switch(GameWorld.state)
        {
            case Running:
            {
                if(GameRenderer.jumpButton.isTouchDown((int)screenX, (int)screenY))
                {
                    if(player.isJumped() == false)
                    {
                        player.jump();
                        if(!GameWorld.soundMuted) AssetLoader.jump.play(AssetLoader.SOUND_VOL);
                    }

                }
                break;
            }
            default: break;
        }

        return false;
    }

    @Override
    public boolean keyDown(int keycode) 
    {
        switch(GameWorld.state)
        {
            case Running:
            {
                if(keycode==Keys.SPACE)
                {
                    if(player.isJumped() == false)
                    {
                        player.jump();
                        if(!GameWorld.soundMuted) AssetLoader.jump.play(AssetLoader.SOUND_VOL);
                    }
                }

                if(keycode==Keys.LEFT)
                {
                    leftDown=true;
                }

                if(keycode==Keys.RIGHT)
                {
                    rightDown=true;
                }

                if(keycode==Keys.CONTROL_RIGHT)
                {
                    player.shoot();
                }
                break;
            }

            default: break;
        }

        return false;
    }



}
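For reference, the letterbox math in render() implies a fairly simple inverse mapping from raw screen coordinates into the 800x480 virtual space. This is a sketch mirroring that logic in Java (the helper is mine, not part of the project), using the same crop and scale values the renderer computes:

```java
// Undo the viewport letterboxing: subtract the crop offset, then divide by
// the scale factor to land in virtual-resolution coordinates.
class VirtualCoords {
    static float[] screenToVirtual(float screenX, float screenY,
                                   float cropX, float cropY, float scale) {
        return new float[] {
            (screenX - cropX) / scale,
            (screenY - cropY) / scale,
        };
    }
}
```

Note that touch input and the GL viewport may disagree on the y origin, so a height-minus-y flip may also be needed depending on how the camera is set up.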
","48445","","","","","2014-06-22 18:10:57","Convert Screen coords to World Coords LIBGDX","","1","0","","","","CC BY-SA 3.0" "79123","1","79124","","2014-06-22 15:30:26","","0","202","

I've been trying to implement volumetric lighting using the code from this tutorial, but I've run into some issues, even after essentially copy-pasting the shader code.

I'll just show you what's going wrong.

As you can see, the models seem to be ""projected"" into the space. At first I tried tinkering with the various variables the shader requires, but to no avail. In the screenshot I used 0.5f for the decay, exposure, weight, and density, with a sample count of 64.

Any ideas on what might be causing this?

Draw method:

    public override void Draw(GameTime gameTime, SpriteBatch spriteBatch)
    {
        base.Draw(gameTime, spriteBatch);

    //Set the rendertarget to draw the Blinn-Phong shaded scene to
        GraphicsDevice.SetRenderTarget(renderTarget);
        GraphicsDevice.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);

        Material material = new Material();
        material.DiffuseColor = Color.Red;
        material.AmbientColor = Color.Red;
        material.AmbientIntensity = 0.2f;
        material.SpecularColor = Color.White;
        material.SpecularIntensity = 2.0f;
        material.SpecularPower = 25.0f;

    //Draw all the models
        foreach (ModelData m in models)
        {
            ModelMesh mesh = m.Model.Meshes[0];
            Effect e = mesh.Effects[0];

            e.CurrentTechnique = e.Techniques[m.Technique];
            material.SetEffectParameters(e);
            this.camera.SetEffectParameters(e);
            e.Parameters[""World""].SetValue(world * m.Transform);
            e.Parameters[""WorldInverseTransposed""].SetValue(Matrix.Transpose(Matrix.Invert(world * m.Transform)));
            e.Parameters[""CameraEye""].SetValue(new Vector4(this.camera.Eye, 0));
            // TODO: LightSource Color + Intensity
            e.Parameters[""LightSources""].SetValue(lightPositions);

            mesh.Draw();
        }

    //Restore the rendertarget to the backbuffer and clear it.
        GraphicsDevice.SetRenderTarget(null);
        GraphicsDevice.Clear(Color.Black);

    //Pass the standard variables to the spriteBatch vertex shader.
    //The vertex shader isn't used (as far as I can tell, at least), but because the standard
    //vertex shader for spriteBatch is compiled for 2_0 and the god ray shader for 3_0, I had to implement it
    //manually in order to compile it for 3_0.
        Matrix projection = Matrix.CreateOrthographicOffCenter(0,
        GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height, 0, 0, 1);
        Matrix halfPixelOffset = Matrix.CreateTranslation(-0.5f, -0.5f, 0);
        postProcessing.Parameters[""MatrixTransform""].SetValue(halfPixelOffset * projection);

    //Setup all the required data for the shader
        postProcessing.CurrentTechnique = postProcessing.Techniques[""Technique1""];
        Vector3 lightPosition = Vector3.Transform(lightPositions[0], world * camera.ViewMatrix * camera.ProjectionMatrix);
        postProcessing.Parameters[""lightPosition""].SetValue(new Vector2(lightPosition.X, lightPosition.Y));
    postProcessing.Parameters[""Tex""].SetValue(renderTarget);
        postProcessing.Parameters[""exposure""].SetValue(0.5f);
        postProcessing.Parameters[""decay""].SetValue(0.5f);
        postProcessing.Parameters[""weight""].SetValue(0.5f);
        postProcessing.Parameters[""density""].SetValue(0.5f);

    //Draw the renderTarget to the screen at the size of the viewport so it fits the screen
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive, SamplerState.PointWrap, DepthStencilState.Default, RasterizerState.CullNone, postProcessing);
        spriteBatch.Draw(renderTarget, new Rectangle(spriteBatch.GraphicsDevice.Viewport.X, spriteBatch.GraphicsDevice.Viewport.Y, spriteBatch.GraphicsDevice.Viewport.Width, spriteBatch.GraphicsDevice.Viewport.Height), Color.White);
        spriteBatch.End();

    //Restore some of the changes spriteBatch.Begin() made to the graphics device so the 3D render won't break
        GraphicsDevice.BlendState = BlendState.Opaque;
        GraphicsDevice.DepthStencilState = DepthStencilState.Default;
        GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;
    }

Shader:

sampler2D renderTarget;

#define SAMPLE_AMOUNT 64

Texture2D Tex;
float2 lightPosition;
float exposure;
float decay;
float weight;
float density;

SamplerState State = sampler_state
{
    Texture = <Tex>;
    MipFilter = Point;
    MinFilter = Linear;
    MagFilter = Linear;
    AddressU = Wrap;
    AddressV = Wrap;
};

float4x4 MatrixTransform;

//The default vertex shader of the spriteBatch, implemented manually so it can be compiled in 3_0
void SpriteVertexShader(inout float4 color    : COLOR0,
                        inout float2 texCoord : TEXCOORD0,
                        inout float4 position : SV_Position)
{
    position = mul(position, MatrixTransform);
}

//Copypasted from the tutorial, only changed some variable names
float4 main(float2 texCoord : TEXCOORD0) : COLOR0  
{  
  // Calculate vector from pixel to light source in screen space.  
   half2 deltaTexCoord = (texCoord - lightPosition.xy);  
  // Divide by number of samples and scale by control factor.  
  deltaTexCoord *= 1.0f / SAMPLE_AMOUNT * density;  
  // Store initial sample.  
   half3 color = tex2D(State, texCoord);  
  // Set up illumination decay factor.  
   half illuminationDecay = 1.0f;  
  // Evaluate summation from Equation 3 NUM_SAMPLES iterations.  
   for (int i = 0; i < SAMPLE_AMOUNT; i++)  
  {  
    // Step sample location along ray.  
    texCoord -= deltaTexCoord;  
    // Retrieve sample at new location.  
   half3 sample = tex2D(State, texCoord);  
    // Apply sample attenuation scale/decay factors.  
    sample *= illuminationDecay * weight;  
    // Accumulate combined color.  
    color += sample;
    // Update exponential decay factor.  
    illuminationDecay *= decay;  
  }  
  // Output final color with a further scale control factor.  
   return float4( color * exposure, 1);  
}  

technique Technique1
{
    pass Pass1
    {
    VertexShader = compile vs_3_0 SpriteVertexShader();
        PixelShader = compile ps_3_0 main();
    }
}
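One detail that may be worth double-checking (an assumption on my part, not something stated in the tutorial): the shader subtracts lightPosition from texCoord, which lives in [0,1] texture space, while transforming by a projection matrix yields clip-space values. The usual clip-to-texture mapping is a perspective divide followed by an NDC-to-UV remap with a y flip; sketched in Java for illustration (the helper name is made up):

```java
// Map a clip-space position (x, y, w) into [0,1] texture coordinates:
// perspective divide to NDC, then remap [-1,1] to [0,1] and flip y,
// since texture space usually grows downward.
class LightScreenPos {
    static float[] clipToTexCoord(float x, float y, float w) {
        float ndcX = x / w;
        float ndcY = y / w;
        return new float[] {
            ndcX * 0.5f + 0.5f,
            1.0f - (ndcY * 0.5f + 0.5f),
        };
    }
}
```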
","48455","","33287","","2019-04-22 15:04:42","2019-04-22 15:04:42","Volumetric Lighting projects models","<3d>","1","0","","","","CC BY-SA 4.0" "79135","1","90088","","2014-06-22 20:17:39","","0","285","

I am using the latest Unity engine and I am having a scripting issue.

I created a prefab, added a few scripts to it, and I am trying to instantiate a few of those objects like this:

        GameObject go = Instantiate(Prefab) as GameObject;
        var co = go.GetComponent<MyScriptA>();

The issue here is that co is always null, which means MyScriptA is not on the go instance. Calling GetComponent on the prefab also returns null, yet the prefab has the scripts attached in the editor, and it is also assigned in the editor to the Prefab variable (by drag and drop). So I am not sure what might be wrong. For example, if I Debug.Log(prefab), it's not null.

So what am I doing wrong here?

","46819","","","","","2014-12-22 13:56:51","Unity3D : Prefab Instancing Issue","","1","8","","","","CC BY-SA 3.0" "79142","1","79160","","2014-06-23 01:50:15","","0","716","

Overview

My splash screen starts by displaying a 'loading' dialog and then kicks off an AsyncTask (the loading dialog is derived from the standard Android View class).

Within the doInBackground method of the AsyncTask, all of my quad/sprite objects are created, other values are set, and so on.

When the code gets to onSurfaceCreated, it loads all of the textures and runs the game.

The texture loading happens in onSurfaceCreated because it has to be done on the GL Thread.

Now, on a nice fast tablet, it all works great and it's very stable. However, when I run it on an old handset, it (intermittently) crashes all over the place with NullPointerExceptions.

Reason

What's happening is that on the slower device, the AsyncTask (due to its asynchronous nature) continues to run in the background while the main code moves on to the onSurfaceCreated method of the GLRenderer; there it attempts to apply textures to objects that don't yet exist because the AsyncTask is still doing its business.

How best to proceed?

What are some methods I can implement that would let me guarantee the AsyncTask has finished before anything attempts to use the objects it creates?

Is there a way to 'halt' the GL thread until the AsyncTask has finished? I've been told in the past to use a separate thread or an AsyncTask for loading resources, but I'm confused about how to handle it correctly and keep everything sync'd between it and the GL thread (if ""sync'd"" is the correct term in this context).
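One common shape for this kind of hand-off (a sketch, not the only option; the class name is made up) is a java.util.concurrent.CountDownLatch: doInBackground counts it down once the objects exist, and onSurfaceCreated blocks on it before touching them:

```java
import java.util.concurrent.CountDownLatch;

// The loader thread signals the latch when all objects are created;
// the GL thread waits on it before applying textures.
class AssetGate {
    private final CountDownLatch ready = new CountDownLatch(1);

    // Call at the end of doInBackground, after createObjects()/initialise().
    void assetsCreated() {
        ready.countDown();
    }

    // Call at the top of onSurfaceCreated; blocks until assets exist.
    void awaitAssets() {
        try {
            ready.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Blocking the GL thread is crude but safe here; an alternative is for onSurfaceCreated to only set a flag and defer the texture upload to the first onDrawFrame after the latch has been released.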

Any help would be appreciated.

Code example

Showing onCreate and doInBackground from Activity Class

    @Override
    protected void onCreate(Bundle savedInstanceState) {

        //Request full screen
        requestWindowFeature(Window.FEATURE_NO_TITLE);
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN, 
        WindowManager.LayoutParams.FLAG_FULLSCREEN);
        setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);

        super.onCreate(savedInstanceState);

        //Create a displayMetrics object to get pixel width and height
        metrics = new DisplayMetrics();
        getWindowManager().getDefaultDisplay().getMetrics(metrics);
        width = metrics.widthPixels;
        height = metrics.heightPixels;

        //Create splash-screen object and pass in scaled width and height
        splash = new SplashScreen(MainActivity.this, width, height);

        //Create dialog that will show splash-screen 
        loading_dialog = new Dialog(MainActivity.this,android.R.style.Theme_Black_NoTitleBar_Fullscreen);

        //Create and set View
        myView = new MyGLSurfaceView(MainActivity.this);

        //Create a copy of the Bundle
        if (savedInstanceState != null){
            newBundle = new Bundle(savedInstanceState);         
        }

        //Create splash object

        goSplash = new DisplaySplash(newBundle);
        goSplash.execute(); //Start asyncTask

        setContentView(layout);

    }

And....

    @Override
    protected Void doInBackground(Void... params) {
        createObjects();
        initialise();           
        return null;
    }

Showing onSurfaceCreated from GLRenderer Class

@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {

    //Eek! Running before AsyncTask has finished on slower devices! :(
    res.loadResources();
    res.setTextures();
    res.RecycleBitmaps();

}
","22241","","","","","2014-06-23 08:38:45","Nullpointerexception when loading resources in openGL ES 2.0 Android project","","1","0","","","","CC BY-SA 3.0" "79144","1","79167","","2014-06-23 03:08:20","","2","1167","

I've looked up a lot of tutorials on YouTube and all of them only work for versions of Minecraft prior to 1.7.9.

I first got a Minecraft Coder Pack (MCP) off of this website, but then realized it only decompiles Minecraft 1.6.4. Then I found a more recent MCP (which isn't on the website for some reason), version 9.03, downloaded here. It decompiles Minecraft 1.7.2 (when I followed this video's instructions and ran the decompile.bat file, it said Json file not found in C:\Users\mike\AppData\Roaming\.minecraft\versions\1.7.2\1.7.2.json).

Basically I can't decompile Minecraft 1.7.9, but I can decompile older versions. However, I don't have any older versions downloaded onto my computer. I have only 1.7.9.

Then I tried using Forge, but realized that most videos were using versions of Minecraft prior to 1.6.4, meaning they use the bin folder, which doesn't exist anymore. Even after trying to figure that out as well, the decompiling never worked. I tried to do what this video did, but couldn't replicate it. Then I finally looked at this video about using Forge and I could replicate it, but it didn't decompile Minecraft; it just set up a workspace in Eclipse that I'm not sure how to use.

TL;DR

I can decompile Minecraft 1.6.4 and 1.7.2 but I can't decompile version 1.7.9. Should I download an older version of Minecraft, wait for an MCP for 1.7.9, or something else? Is there something I'm missing, where I actually can decompile and mod Minecraft 1.7.9?

","48470","","","","","2016-01-29 05:48:56","How can I mod Minecraft 1.7.9?","","1","4","1","","","CC BY-SA 3.0" "79148","1","79150","","2014-06-23 03:42:15","","5","3848","

Most folks who've played FPS games would have noticed that the same weapons tend to reappear again and again, such as machine guns and shotguns. There was once a lot of innovation in the types of weapons in the genre's early days (see Duke Nukem 3D or Unreal Tournament for example), but in recent years the weapons tend to be cosmetic variations on a few standard types (warning: tvtropes link).

Why has this happened? I believe a large reason is that these standard guns serve distinct gameplay roles or purposes, so that even if they are replaced with a gun that is named differently or looks different, they act almost the same. If so, what are these roles, and could they be replaced with a gun that acted differently? For example, is it possible to replace the shotgun with something that was not semi-automatic, or fired a burst of pellets, without affecting the role?

","26250","","7191","","2014-06-25 16:12:34","2016-08-25 03:34:52","Why do most FPS games have a machine gun, shotgun, and sniper rifle?","","5","5","","","","CC BY-SA 3.0" "79157","1","79195","","2014-06-23 07:29:22","","4","775","

I recently implemented MSAA in my deferred renderer. It looks good, but I have a feeling I might have done it wrong.

Here is what for example the directional light fragment shader looks like:

const float DEPTH_BIAS = 0.00005;                                                                                               \n \
                                                                                                                                \n \
layout(std140) uniform UnifDirLight                                                                                             \n \
{                                                                                                                               \n \
    mat4 mVPMatrix[4];                                                                                                          \n \
    mat4 mCamViewMatrix;                                                                                                        \n \
    vec4 mSplitDistance;                                                                                                        \n \
    vec4 mLightColor;                                                                                                           \n \
    vec4 mLightDir;                                                                                                             \n \
    vec4 mGamma;                                                                                                                \n \
    vec2 mScreenSize;                                                                                                           \n \
    int mNumSamples;                                                                                                            \n \
} UnifDirLightPass;                                                                                                             \n \
                                                                                                                                \n \
layout (binding = 2) uniform sampler2DMS unifPositionTexture;                                                                   \n \
layout (binding = 3) uniform sampler2DMS unifNormalTexture;                                                                     \n \
layout (binding = 4) uniform sampler2DMS unifDiffuseTexture;                                                                    \n \
layout (binding = 6) uniform sampler2DArrayShadow unifShadowmap;                                                                \n \
                                                                                                                                \n \
out vec4 fragColor;                                                                                                             \n \
                                                                                                                                \n \
void main()                                                                                                                     \n \
{                                                                                                                               \n \
    ivec2 texcoord = ivec2(textureSize(unifDiffuseTexture) * (gl_FragCoord.xy / UnifDirLightPass.mScreenSize));                 \n \
                                                                                                                                \n \
    vec3 worldPos = vec3(0.0), normal = vec3(0.0), diffuse = vec3(0.0);                                                         \n \
    for (int i = 0; i < UnifDirLightPass.mNumSamples; i++)                                                                      \n \
    {                                                                                                                           \n \
        worldPos += texelFetch(unifPositionTexture, texcoord, i).rgb;                                                           \n \
        normal   += texelFetch(unifNormalTexture, texcoord, i).rgb;                                                             \n \
        diffuse  += texelFetch(unifDiffuseTexture, texcoord, i).rgb;                                                            \n \
    }                                                                                                                           \n \
    worldPos /= UnifDirLightPass.mNumSamples;                                                                                   \n \
    normal   /= UnifDirLightPass.mNumSamples;                                                                                   \n \
    diffuse  /= UnifDirLightPass.mNumSamples;                                                                                   \n \
    normal = normalize(normal);                                                                                                 \n \
                                                                                                                                \n \
    vec4 camPos = UnifDirLightPass.mCamViewMatrix * vec4(worldPos, 1.0);                                                        \n \
                                                                                                                                \n \
    int index = 3;                                                                                                              \n \
    if (camPos.z > UnifDirLightPass.mSplitDistance.x)                                                                           \n \
        index = 0;                                                                                                              \n \
    else if (camPos.z > UnifDirLightPass.mSplitDistance.y)                                                                      \n \
        index = 1;                                                                                                              \n \
    else if (camPos.z > UnifDirLightPass.mSplitDistance.z)                                                                      \n \
        index = 2;                                                                                                              \n \
                                                                                                                                \n \
    vec4 projCoords = UnifDirLightPass.mVPMatrix[index] * vec4(worldPos, 1.0);                                                  \n \
    projCoords.w    = projCoords.z - DEPTH_BIAS;                                                                                \n \
    projCoords.z    = float(index);                                                                                             \n \
    float visibilty = texture(unifShadowmap, projCoords);                                                                       \n \
                                                                                                                                \n \
    float angleNormal = clamp(dot(normal, UnifDirLightPass.mLightDir.xyz), 0, 1);                                               \n \
                                                                                                                                \n \
    fragColor = vec4(diffuse, 1.0) * visibilty * angleNormal * UnifDirLightPass.mLightColor;                                    \n \
}                                                                                                                               \n"";
  1. I average the position/normals/diffuse before light and shadow calculations, but I just realised perhaps I should average the computed results instead. Which one is correct?

  2. Right now I'm using hardware PCF for my shadows. Since I'm fetching several samples anyway for MSAA, couldn't I use those for shadow filtering as well? Any downsides?
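As a side note on question 1: with a nonlinear lighting term, shading the averaged G-buffer is not the same as averaging per-sample shading. A toy Python sketch (made-up sample values, Lambert term only, not from the post) shows the difference at a silhouette edge:

```python
# Toy illustration: lighting is nonlinear (clamped dot product),
# so shade(average(samples)) != average(shade(samples)) in general.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def shade(normal, light_dir):
    # Lambert term with clamp; the clamp is one source of nonlinearity.
    return max(0.0, dot(normalize(normal), light_dir))

light = (0.0, 1.0, 0.0)

# Two MSAA samples straddling an edge: their normals point apart.
samples = [(1.0, 1.0, 0.0), (-1.0, 1.0, 0.0)]

# Option A (as in the shader above): average the G-buffer, shade once.
avg_normal = tuple(sum(n[i] for n in samples) / len(samples) for i in range(3))
shade_of_average = shade(avg_normal, light)

# Option B: shade each sample, then average the shaded results.
average_of_shades = sum(shade(n, light) for n in samples) / len(samples)

print(shade_of_average, average_of_shades)  # they differ at edges
```

The two results only disagree where the samples disagree, i.e. at geometry edges, which is why averaging the attributes first can reintroduce edge artifacts that per-sample shading avoids.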

","27464","","27464","","2014-06-23 13:28:56","2014-06-23 17:48:12","MSAA deferred implementation issue","","1","0","2","","","CC BY-SA 3.0" "79162","1","79177","","2014-06-23 09:20:08","","3","529","

I am trying to calculate the reflection of a laser within a polygon. My current calculations are probably quite long-winded because I'm building on line intersection and other functions. The problem is that I'm using a point (x,y) with velocity (x,y) and trying to calculate where the point is after each reflection off a line; when the point reflects within very small corners, I can't seem to calculate the final location and velocity of the laser point.

Is there a well known algorithm for calculating laser reflection in 2D within polygons?

Note: I would post my code but as stated above it's extremely long ATM.

My general logic is:

Call method with particle {x,y,velocity={x,y}}
Begin loop
  Check for intersections
  If no intersections then exit
  Get closest intersection to particle
  Update particle location, direction and velocity
End loop
Refresh particle velocity (to maintain speed)
Return particle

I was hoping there was something a bit more concise for this (basic?) math problem.
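For reference, the per-bounce reflection itself is one standard formula, r = v − 2(v·n)n, where n is the unit normal of the wall segment. A minimal Python sketch (hypothetical function names):

```python
# Reflect a 2D velocity vector off a wall with unit normal n:
#   r = v - 2 * (v . n) * n
# This is the standard mirror-reflection formula applied per bounce.

def reflect(velocity, normal):
    vx, vy = velocity
    nx, ny = normal
    d = vx * nx + vy * ny          # v . n
    return (vx - 2 * d * nx, vy - 2 * d * ny)

def unit_normal(p1, p2):
    # Unit normal of the wall segment p1 -> p2 (perpendicular to it).
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    length = (dx * dx + dy * dy) ** 0.5
    return (-dy / length, dx / length)

# A ray moving down-right hits a horizontal floor: the y-velocity flips.
n = unit_normal((0, 0), (1, 0))
print(reflect((3, -4), n))   # -> (3.0, 4.0)
```

If n is unit length, the reflected vector keeps the same magnitude, so a separate "refresh particle velocity" step to maintain speed becomes unnecessary.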

","26888","","","user1430","2018-05-14 15:55:09","2018-05-14 15:55:09","Calculate laser bounce inside polygon","<2d>","2","4","","","","CC BY-SA 4.0" "79165","1","79176","","2014-06-23 09:57:02","","0","119","

I've never written card games before and am currently coding up a simple card game. I have a deck of cards that needs to be shuffled (done) and then drawn from the top of the deck into a tableau (centre of the table) from which the user(s) can pick.

The only issue is that I'm not sure how to figure out where the cards drawn from the top of the deck should go.

Let's say the draw deck is drawn at 100,100.

Let's say the tableau starts at 200,200 and can have 6 cards in them.

How do I know where the cards are going to appear on screen within the tableau?

My current thought is that we fill the tableau with ""empty"" card objects at set X/Y co-ordinates, so that I know where the cards should go without having to do too much programming.

But I'm not sure if this is the best way to do it because of screen sizes, etc.

i.e. in the tableau, a card's width is 50 and the distance between cards is 20:

> Card slot #1 = 200,200 (State: Empty)
> Card slot #2 = 270,200 (State: Empty) 
> Card slot #3 = 340,200 (State: Empty) 

So the first card drawn would animate from 100,100 to 200,200 and change its state to Filled; the second to 270,200, changing its state; the third to 340,200, changing its state; and so on.

But I'm still unsure if using hard number co-ordinates is really the best way forward, especially if the screen sizes change; or even if there's a better way to animate cards from a draw pile to a tableau.
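The slot coordinates in the example above follow one formula, x_i = startX + i * (cardWidth + gap). A small Python sketch (hypothetical names) that generates them instead of hard-coding each slot:

```python
# Compute tableau slot positions from a start point, card width and gap,
# instead of hard-coding each slot.  A resolution change then only
# affects the three layout parameters.

def slot_positions(start_x, start_y, card_width, gap, count):
    return [(start_x + i * (card_width + gap), start_y) for i in range(count)]

# The numbers from the question: start 200,200, width 50, gap 20.
print(slot_positions(200, 200, 50, 20, 3))
# -> [(200, 200), (270, 200), (340, 200)]
```

For varying screen sizes, derive start, card width and gap from the screen dimensions (e.g. as fractions of the width), and the slots adapt automatically.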

My question therefore is -- how do I make the computer know where to put cards drawn from a deck of cards to a tableau?

Many thanks

","41259","","","","","2014-06-23 12:01:49","Where to place cards drawn from a deck of cards to a tableau","","1","0","","","","CC BY-SA 3.0" "79168","1","79170","","2014-06-23 11:03:03","","0","91","

Assume I have a List<SomeClass> myList; in my Game1 class. It also contains an object SomeOtherClass otherClass;. How would I modify myList (Add/Remove) from otherClass's logic? In other words, how do I get access to the Game1 instance in this situation?

","10044","","","","","2014-06-23 11:17:51","How to modify Game1.cs out of an object it contains?","","1","1","","2014-06-25 16:17:12","","CC BY-SA 3.0" "79172","1","79206","","2014-06-23 11:44:19","","4","3655","

I have a three dimensional pyramid given by four vectors a, b, c, d and want to test if a given vector x is inside that region or not. Here is an image:

A related question can be found here: 2D problem.

How can I test to see if the vector is contained by the others?
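One common way to frame this: put the apex at the origin and test x against the four side planes, each spanned by two adjacent edge vectors. A Python sketch under that assumption (edge vectors given in order around the pyramid; names are mine, not from the post):

```python
# Apex at the origin; a, b, c, d are the four edge vectors, ordered
# around the pyramid.  x is inside iff it lies on the inner side of all
# four side planes.  Each plane is spanned by two adjacent edges; its
# normal is their cross product, oriented by testing against an edge
# known to be on the inner side.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def inside_pyramid(x, edges):
    n = len(edges)
    for i in range(n):
        e1, e2 = edges[i], edges[(i + 1) % n]
        normal = cross(e1, e2)
        inner = edges[(i + 2) % n]        # an edge on the inner side
        if dot(normal, inner) < 0:
            normal = tuple(-c for c in normal)
        if dot(normal, x) < 0:
            return False
    return True

# Pyramid opening along +z.
edges = [(1, 1, 1), (-1, 1, 1), (-1, -1, 1), (1, -1, 1)]
print(inside_pyramid((0, 0, 5), edges))   # -> True
print(inside_pyramid((5, 0, 1), edges))   # -> False
```

This is the same half-space test a view frustum uses, just with the planes derived from the corner rays instead of stored explicitly.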

","48486","","7191","","2014-06-23 14:28:29","2014-06-23 21:34:43","Checking if a vector is contained inside a viewing frustum","<3d>","2","3","2","","","CC BY-SA 3.0" "115269","1","115505","","2016-01-20 16:52:50","","1","156","

I've followed this tutorial https://developers.google.com/admob/android/quick-start, but I'm stuck at the Gradle configuration. I've added the line

compile 'com.google.android.gms:play-services-ads:8.3.0'

to build.gradle (of the application) as in the tutorial, but the error ""Failed to find: com.google.android.gms:play-services-ads:8.3.0"" appeared. The AdMob docs say that Android Studio will offer to download the dependency if it is missing, but I only got the error in the output.

","75033","","75033","","2016-01-21 15:22:45","2016-01-24 17:01:02","Admob - gradle configuration error","","1","0","","","","CC BY-SA 3.0" "115270","1","115284","","2016-01-20 17:03:44","","0","97","

When I try to display an image that is 400 pixels wide and 800 pixels high, it is not displayed that way. Instead it is displayed like this:

You can see, at the bottom and a few pixels to the right of the phone, some thin white lines. I did not add these manually and they are not part of the picture; the picture is perfectly cropped around the phone.

When I add + vec4(1, 1, 1, .5) in my fragment shader, it shows the area that the phone should have covered. Image

Code for creation of an object that holds info about the image:

GuiTexture phone = new GuiTexture(loader.loadTexture(""phone_cropped""), Display.getWidth() / 2, Display.getHeight() / 2, 2, 400, 800);

The loadTexture method:

public int loadTexture(String fileName) {
    Texture texture = null;
    try {
        texture = TextureLoader.getTexture(""PNG"", new FileInputStream(""res/"" + fileName + "".png""));
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    int textureID = texture.getTextureID();
    textures.add(textureID);
    return textureID;
}

The line that calls the render class to render the image

guiRenderer.render(guis);

The transformation matrices that the image is multiplied by:

float width = gui.getWidth() / 2;
float height = gui.getHeight() / 2;

Matrix4f modelMatrix = new Matrix4f();
modelMatrix.m00 = 2.0f / (float) Display.getWidth();
modelMatrix.m11 = 2.0f / (float) Display.getHeight();
modelMatrix.m30 = -1;
modelMatrix.m31 = 1;

shader.loadModel(modelMatrix);

Matrix4f transformationMatrix = Maths.createTransformationMatrix(
        new Vector2f(gui.getxPos(), -gui.getyPos()),
        gui.getRotation(),
        new Vector2f(width, height));

shader.loadTransformation(transformationMatrix);

Vertex shader code:

gl_Position = modelMatrix * transformationMatrix * vec4(position, 0.0, 1.0);
textureCoords = vec2((position.x+1.0)/2.0, 1 - (position.y+1.0)/2.0);

This does not happen to all images; when I load an image that is square and display it with width 400 and height 800, it works perfectly. Image

With different images it yields different extra space.

All images are .png. The phone image is 1009x2057 pixels. I also tried a phone image that was 2048 pixels high (1006x2048), since this is a power of 2; that still yields a white line on the side but does look better: image

The square (3rd image) that was elongated to look like a rectangle and does display correctly is 256 x 256 pixels.

All displayed images have a slight rotation because without rotation the thin white lines don't always show up; the rotation did not change anything else about the images.

To load images I use slick utils TextureLoader class.
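One plausible culprit (not confirmed in the post): older loaders such as slick-util commonly pad non-power-of-two images up to the next power of two, and if the texture coordinates still span the full [0, 1] range, the padding shows up as blank space along two edges. A Python sketch of the padded sizes and the UV scale that would compensate:

```python
# If a loader pads a non-power-of-two image up to the next power of two,
# texture coordinates must be scaled by (image size / padded size),
# otherwise the padding shows up as blank space along two edges.

def next_pow2(n):
    p = 1
    while p < n:
        p *= 2
    return p

def uv_scale(width, height):
    return (width / next_pow2(width), height / next_pow2(height))

# The 1009x2057 phone image from the question:
print(next_pow2(1009), next_pow2(2057))  # -> 1024 4096
w, h = uv_scale(1009, 2057)
print(round(w, 3), round(h, 3))          # -> 0.985 0.502
```

This would also fit the observation that the 1006x2048 version only shows the line on one side: its height needs no padding, while its width still does.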

","66077","","66077","","2016-01-20 18:51:43","2016-01-20 22:02:04","Image rendering with additional space around it","","1","0","","","","CC BY-SA 3.0" "115273","1","115274","","2016-01-20 19:19:32","","0","4163","

I am currently working on a small game project and I want to monetize my game with AdMob ads. So I watched this YouTube Tutorial.

It works fine, but the problem is that the ad appears in all scenes, and I don't want that. I want the ads only in certain scenes.

For example, look at the image below.

I want the ad in the Instruction scene, so when the player clicks the Back To Main Menu button the ad should disappear.

I tried

bannerView.Hide (); 
bannerView.Destroy();

but Unity gives an error:

error CS0103: The name bannerView does not exist in the current context

Here is the script:

using UnityEngine;
using System.Collections;

// AdMob
using GoogleMobileAds.Api;

public class Instruction_Level_Manager_Script : MonoBehaviour 
{
  // Audio
  public AudioClip My_Audio_Clip ;
  private AudioSource My_Audio ;

  // Use this for initialization
  void Start () 
  {
    My_Audio = GetComponent<AudioSource>();

    // AdMob Ad
    RequestBanner ();
  }

  // Update is called once per frame
  void Update () 
  {
  }

  // Back to Main Menu
  public void Back_To_Main_Menu(string Main_Menu)
  {
    //Playing Audio
    My_Audio.PlayOneShot(My_Audio_Clip);

    //Destroy when leaving the level
    bannerView.Hide (); 
    bannerView.Destroy();

    Application.LoadLevel (Main_Menu);
  }

  // AdMob Advertisement
  private void RequestBanner()
  {
    #if UNITY_ANDROID
    string adUnitId = ""ca-app-pub-xxxxxxxx/xxxxxxxx"";
    #elif UNITY_IPHONE
    string adUnitId = ""INSERT_IOS_BANNER_AD_UNIT_ID_HERE"";
    #else
    string adUnitId = ""unexpected_platform"";
    #endif

    // Create a 320x50 banner at the top of the screen.
    BannerView bannerView = new BannerView(adUnitId, AdSize.Banner, AdPosition.Top);
    // Create an empty ad request.
    AdRequest request = new AdRequest.Builder().Build();
    // Load the banner with the request.
    bannerView.LoadAd(request);
  }
}

So what is wrong with this script? I Googled it, but couldn't find a satisfying solution. If this method is not possible, how else can I destroy the ad?

I am using Unity 5.

","66773","","40264","","2016-01-20 19:42:06","2017-04-20 09:19:27","Unable to hide / destroy AdMob Ads in Unity","","3","1","","","","CC BY-SA 3.0" "115277","1","115374","","2016-01-20 20:48:06","","3","1349","

What I want is basically: a way to blur every object/sprite in the scene, but with a ""blur-free"" circular zone that can move. Everything behind that circular zone won't have the blur effect applied to it.

In a 2D mobile game, how would I do that, ideally in a way that's not too performance-heavy?

And if it's not possible to do that in a way that won't completely destroy my performance: I also have those sprites already ""pre-blurred"", so maybe there's a way to have both blurred and ""unblurred"" objects at the same position, and only draw the right parts of each as they go through the scene and reach the blur-free zone. If there's a way to do that, it would also help immensely.
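The pre-blurred idea amounts to a per-pixel blend between the blurred and sharp versions, weighted by a circular mask. A toy grayscale Python sketch of that weighting (in a real game this would run in a shader or via sprite masking):

```python
# Per-pixel blend between a pre-blurred image and the sharp image using
# a circular mask: inside the circle show the sharp pixel, outside show
# the blurred one.  Toy grayscale version with nested lists as images.

def focus_weight(px, py, cx, cy, radius):
    # 1.0 inside the circle, 0.0 outside.  (Could be smoothed at the
    # edge for a soft transition.)
    dx, dy = px - cx, py - cy
    return 1.0 if dx * dx + dy * dy <= radius * radius else 0.0

def composite(sharp, blurred, cx, cy, radius):
    h, w = len(sharp), len(sharp[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            t = focus_weight(x, y, cx, cy, radius)
            row.append(t * sharp[y][x] + (1 - t) * blurred[y][x])
        out.append(row)
    return out

sharp   = [[255] * 4 for _ in range(4)]
blurred = [[100] * 4 for _ in range(4)]
result = composite(sharp, blurred, 1, 1, 1)
print(result[1][1], result[3][3])  # 255.0 inside the zone, 100.0 outside
```

Since only the blend weight changes per frame, moving the circle is cheap; the expensive blur is paid once, offline.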

Thanks for your time.

","77920","","","","","2016-01-22 14:15:17","How to blur entire scene but a specific spot in Unity?","<2d>","1","3","","","","CC BY-SA 3.0" "115280","1","115282","","2016-01-20 21:19:05","","0","112","

In the slides over here by NVIDIA, they describe methods for BRDF compression. They first create a BRDF matrix where each column (or row) corresponds to a single light direction (or outgoing view direction). This matrix is then compressed by decomposing it with either SVD or normalized decomposition. They claim SVD gives better results than normalized decomposition for similar compression sizes. Does anyone know what the possible reason for this could be?

","39506","","39518","","2018-02-25 18:05:05","2018-02-25 18:05:05","SVD vs Normal decomposition for BRDF compression","","1","0","1","","","CC BY-SA 3.0" "115293","1","115306","","2016-01-21 04:35:34","","-4","978","

Planning to use some sprites for a fan project but the sprites I want haven't been ripped yet; how can I extract them in a usable form?

","77950","","","user1430","2016-01-21 16:44:51","2016-01-21 16:44:51","How can I extract sprites from a SNES games?","<2d>","1","6","","2016-01-21 16:47:54","","CC BY-SA 3.0" "115297","1","115310","","2016-01-21 08:06:08","","2","254","

I have two classes:

  • JoystickView (Extends View).
  • GameView (Extends SurfaceView): It will be updated by a Thread that calls the onDraw method.

Now, if I retrieve a direction via OnTouchEvent inside the JoystickView, how can I send this information to the GameView? Can I use another thread for the JoystickView and allow communication between the Joystick thread and the GameView thread? Otherwise, how can I do this?
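Not Android-specific, but the usual shape of the solution is a thread-safe queue shared between the input view and the game loop, sketched here in Python (the real code would use a Java concurrent queue instead):

```python
# Pattern sketch: the input view pushes events onto a thread-safe queue,
# and the game loop drains the queue each frame.  The two threads need
# no direct reference to each other beyond the shared queue.
import queue
import threading

events = queue.Queue()

def joystick_touch(direction):
    # Called from the UI thread (OnTouchEvent in the real code).
    events.put(direction)

def game_frame(state):
    # Called from the game thread before updating/drawing.
    while not events.empty():
        state.append(events.get())

state = []
t = threading.Thread(target=joystick_touch, args=('LEFT',))
t.start()
t.join()
game_frame(state)
print(state)  # -> ['LEFT']
```

This decouples the two classes: JoystickView only knows the queue, GameView only drains it, and neither needs a reference to the other.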

","77958","","76238","","2016-01-21 10:07:20","2016-01-21 12:07:33","How can I communicate a Joystick with a SurfaceView?","","1","0","","","","CC BY-SA 3.0" "115307","1","115320","","2016-01-21 11:51:35","","1","1350","

Right now I'm using a Rigidbody2D for the game character along with a Polygon Collider 2D (2D platform game).

I'm a beginner; I hope I'm using the correct components.

I use transform to move the character from left to right:

Character.transform.Translate(Vector2.right * speed * Time.deltaTime);

And upward (jump):

Character.transform.Translate(Vector2.up * speed * Time.deltaTime);

When the game character moves on slopes there's a lot of friction, bounciness and rotation, and it's even worse when jumping from slopes.

","75648","","75648","","2016-01-23 08:14:27","2016-01-23 08:14:27","How to make rigid body move smoothly on uneven platform?","","2","1","","","","CC BY-SA 3.0" "115323","1","115326","","2016-01-21 14:37:46","","5","1128","

I've started working on a demo for my 2.5D game. For a basic scene I figured I would just use the good old fashioned Doom sprites textured onto a double-sided plane. Simple enough. But I've been scratching my head as to how to display the proper sprite animation relative to the player viewing the sprite...if that makes sense.

For example suppose I have three players in the scene Players A, B, and C as such:

Player C should see player A as: Here's an example of what I think player A should look like from C's perspective:

While player B would see player A as follows:

My current idea is that for any given player, I will need to know the current direction of each other player and where they are located/moving relatively. From this, I could calculate which sprite image is required.

Is this correct? What improvements/optimizations could be made?

** EDIT ** Total number of directions will be 8 as depicted below:

For each direction there are two sprites (to show walking movement, sprites for the right hand directions are mirrors of the left):

** EDIT 2 ** Got this working and have a simple javascript/threejs example for anyone interested: https://github.com/commanderZiltoid/threejs-2.5d-fps
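The idea above reduces to one computation: take the angle from the sprite to the viewer, subtract the sprite's facing angle, and quantize to 8 sectors. A Python sketch (hypothetical names, not from the linked example):

```python
# Pick one of 8 sprite directions from the angle between the sprite's
# facing direction and the direction from the sprite to the viewer.
import math

def sprite_index(facing_angle, sprite_pos, viewer_pos, directions=8):
    # Angle from the sprite to the viewer, in radians.
    to_viewer = math.atan2(viewer_pos[1] - sprite_pos[1],
                           viewer_pos[0] - sprite_pos[0])
    # Relative angle, wrapped to [0, 2*pi).
    rel = (to_viewer - facing_angle) % (2 * math.pi)
    # Quantize to the nearest of 8 sectors (sector 0 = facing the viewer).
    sector = 2 * math.pi / directions
    return int((rel + sector / 2) // sector) % directions

# Sprite at the origin facing +x; viewer straight ahead -> front sprite.
print(sprite_index(0.0, (0, 0), (10, 0)))   # -> 0
# Viewer directly behind -> back sprite (index 4 of 8).
print(sprite_index(0.0, (0, 0), (-10, 0)))  # -> 4
```

With mirrored sprites as described above, roughly speaking indices on one side map to mirrored versions of the sprites on the other, so only 5 of the 8 views need unique art.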

","68760","","68760","","2016-01-27 22:39:24","2016-01-27 22:39:24","Doom-style 2.5D Movement Animations","<2.5d>","1","2","2","","","CC BY-SA 3.0" "115327","1","115330","","2016-01-21 15:45:36","","4","1043","

I would like to know one (of the probably many) ways to code a (sort of) Prison-Architect-ish electricity cable building system.

Here's a picture of what I mean:

  1. How can I detect that cables are connected?
  2. How can I detect if the block it's connected to is a power source?

Both of the above will be placed inside a dictionary when they are placed.

That dictionary does not yet have a key type, but the value type is a sprite.

How should I do this?
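One way to answer both questions at once: treat the placed tiles as a grid graph and flood-fill outward from every power source; a cable is "connected to power" exactly when the fill reaches it. A Python sketch (sets standing in for the dictionary keyed by grid position):

```python
# Model cables as a grid keyed by (x, y).  'Is this tile powered?' is a
# flood fill (BFS) from every power source through 4-neighbour-connected
# cable tiles.  Re-run it whenever a cable or source is placed/removed.
from collections import deque

def powered_tiles(cables, sources):
    # cables:  set of (x, y) tiles containing a cable
    # sources: set of (x, y) tiles containing a power source
    seen = set()
    frontier = deque(sources)
    while frontier:
        x, y = frontier.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) in cables and (nx, ny) not in seen:
                seen.add((nx, ny))
                frontier.append((nx, ny))
    return seen

cables  = {(1, 0), (2, 0), (3, 0), (5, 0)}   # (5, 0) is not connected
sources = {(0, 0)}
print(sorted(powered_tiles(cables, sources)))
# -> [(1, 0), (2, 0), (3, 0)]
```

Two cables are "connected" simply by being 4-neighbours in the grid, and "connected to a power source" falls out of the same traversal.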

","75341","","40264","","2016-01-21 16:22:52","2016-01-21 16:33:48","How to do placeable electricity cables that work like the ones in ""Prison Architect""","","2","2","1","","","CC BY-SA 3.0" "115339","1","115357","","2016-01-21 19:37:39","","2","275","

I'm currently learning game development in Unity from this course on Lynda.com. Currently I'm trying to display the time remaining in the game after it has been set to 5 minutes initially. When I look at the scene, I can see the text for the timer displayed in the top left corner of the canvas, but when I run the game, I'm not seeing it at all.

I made a script for a game manager which is derived from a Singleton class. The game manager contains a private variable (and an accessor method) for the time remaining. I have another script that accesses the value for the time remaining and displays it on screen. In the Unity editor, I added a UI game object for the text box and then added the text box to the GUI representation of the timer label attribute. It seems like everything should be working but since I'm still very new to this, I'm probably missing something simple. Here is the code for both scripts:

GameManager.cs

public class GameManager : Singleton<GameManager> {
    private float _timeRemaining;

    public float TimeRemaining
    {
        get { return _timeRemaining; }
        set { _timeRemaining = value; }
    }

    private float maxTime = 5 * 60; // In seconds.


    // Use this for initialization
    void Start () {
        TimeRemaining = maxTime;
    }
    // Update is called once per frame
    void Update () {
        TimeRemaining -= Time.deltaTime;
        if(TimeRemaining <= 0)
        {
            //Now Deprecated
            //Application.LoadLevel(Application.loadedLevel);
            SceneManager.LoadScene(SceneManager.GetActiveScene().buildIndex);
            TimeRemaining = maxTime;
        }
    }
}

UpdateUI.cs

public class UpdateUI : MonoBehaviour {

    [SerializeField]
    private Text timerLabel;

    // Use this for initialization
    void Start () {

    }

    // Update is called once per frame
    void Update () {
        timerLabel.text = FormatTime(GameManager.Instance.TimeRemaining);
    }

    private string FormatTime(float timeInSeconds)
    {
        return string.Format(""{0}:{1:00}"", Mathf.FloorToInt(timeInSeconds / 60), Mathf.FloorToInt(timeInSeconds % 60));
    }
}

EDIT:

Canvas Settings in the inspector

Image of the Time Remaining displayed in the scene

","77985","","77985","","2016-01-21 22:36:38","2016-01-22 06:06:41","Unity 5 - Time Remaining displaying in scene but not in Game","","1","4","","","","CC BY-SA 3.0" "115341","1","115343","","2016-01-21 20:21:07","","1","1188","

I have two pixels, each stored in a 32-bit unsigned integer (4 bytes per pixel, using SDL's types for convenience):

Uint32 pixel1;        // Source pixel, format: SDL_PIXELFORMAT_BGRA8888
Uint32 pixel2_format; // Format of the pixel2 (can change)
Uint32 pixel2;        // Destination pixel

How can I convert the format from pixel1 to pixel2?
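For the specific BGRA8888 → RGBA8888 case the conversion is a byte swizzle; a Python sketch of the bit manipulation (assuming SDL's packed-format convention that components are named from most to least significant byte):

```python
# Manual channel swizzle for a single 32-bit packed pixel, converting
# BGRA8888 to RGBA8888 by extracting each byte and repacking it.

def bgra_to_rgba(pixel):
    b = (pixel >> 24) & 0xFF
    g = (pixel >> 16) & 0xFF
    r = (pixel >> 8)  & 0xFF
    a = pixel & 0xFF
    return (r << 24) | (g << 16) | (b << 8) | a

print(hex(bgra_to_rgba(0xAABBCCDD)))  # b=AA g=BB r=CC a=DD -> 0xccbbaadd
```

For arbitrary (and runtime-varying) destination formats, SDL2 also provides SDL_ConvertPixels, which converts whole buffers between any two pixel formats and avoids writing a swizzle per format pair.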

","62216","","62216","","2016-01-26 08:29:15","2016-01-26 08:31:29","SDL Convert pixel format","","2","0","","","","CC BY-SA 3.0" "115346","1","115358","","2016-01-21 22:04:16","","0","198","

How do I close the GUI after the user enters a value? Is there any way to stop execution of the entire app (from inside this script) until a value is entered?

using UnityEngine;
using System.Collections;
using UnityEngine;
using UnityEditor;

public class Popup : MonoBehaviour {

    // Use this for initialization
    void Start () {

    }

    // Update is called once per frame
    void Update () {

    }

    string record="""";

    void OnGUI() {
        //Participant Number/Record:
        GUILayout.Label (""enter participant id:"");
        record = GUILayout.TextField(record);
        //GUILayout.TextField()

        if (GUILayout.Button(""Submit"")) {
            OnClickSavePrefab();
            GUIUtility.ExitGUI();
        }
    }

    void OnClickSavePrefab() {
        record = record.Trim();

        if (string.IsNullOrEmpty(record)) {
            EditorUtility.DisplayDialog(""Unable to save record"", ""Please specify a valid participant record."", ""Close"");
            return;
        }
        // Save your prefab
        Debug.Log (""record:"" + record);
    }
}
","77992","","77992","","2016-01-22 01:11:00","2016-01-22 06:22:08","closing GUI in Unity","","1","1","","","","CC BY-SA 3.0" "115349","1","116373","","2016-01-22 00:05:20","","4","590","

I see in the doc that the limit for the number of leaderboards on Google Play Services is 70. However, I am able to create more than 70 leaderboards and they are all working when I test my app. Is this limit still up to date? Does this mean that if I publish my game some of the leaderboards will be deactivated or something?

Thank you in advance if you know the answer. I just want to make sure of what I can expect for when I publish my game.

","77999","","","","","2016-02-12 03:51:32","Google leaderboards, up to a maximum of 70?","","3","0","4","","","CC BY-SA 3.0" "115355","1","115558","","2016-01-22 04:09:51","","1","1459","

I am trying to create a little shadow mapping demo.

My code is currently divided into three rendering passes:

  • Pass 1 - Create the depth texture that will be used for shadow mapping on an offscreen framebuffer
  • Pass 2 - (Attempt to) render the scene with shadows using that depth texture
  • Pass 3 - Display the shadow map in the upper right corner (debug purposes)

I have successfully created and rendered a depth texture (passes 1 and 3). However, I am struggling to render the scene with shadows from the camera's POV. There is some flickering at the edges of the thick floor plane as well as on one cube corner. The results look nothing like shadows and the scene looks fully lit. I am thinking that either my second pass or the shaders I am using for that pass are incorrect, but I cannot seem to find the error. It is a relatively short demo written using Python 2.7, OpenGL 2.1, and GLSL 120. Below is the main method that includes most of the relevant code (aside from some window creation, shader compilation, and the primitive matrix math libraries I wrote):
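For reference while reading pass 2 below: the bias matrix only remaps clip-space coordinates in [-1, 1] to the [0, 1] range of shadow-map texture coordinates (0.5 * c + 0.5 per component). A quick Python check of that mapping:

```python
# The 'bias' matrix in the second pass is equivalent to applying
# c * 0.5 + 0.5 to each component, remapping NDC [-1, 1] to [0, 1]
# so the light's projection can index the shadow map.

def apply_bias(ndc):
    return tuple(c * 0.5 + 0.5 for c in ndc)

print(apply_bias((-1.0, -1.0, -1.0)))  # -> (0.0, 0.0, 0.0)
print(apply_bias((1.0, 0.0, 1.0)))     # -> (1.0, 0.5, 1.0)
```

If shadow lookups land outside [0, 1], the bias step (or the order of the matrix multiplications feeding it) is the first thing to check.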

from window import Window
from shader import Shader
from mat4 import Mat4
from vec3 import Vec3

from OpenGL.GL import *
from OpenGL.GLU import *
import math
import numpy as np
from PIL import Image

#Loads my special model file format 
#Basically a super-simplified obj without indexing
def loadAA7(dataUrl):
    vData = []
    tData = []
    nData = []
    inFile = open(dataUrl, ""r"")
    for line in inFile.readlines():
        lineList = line.strip().split(""\t"")
        vData.extend([float(v) for v in lineList[0:3]])
        tData.extend([float(v) for v in lineList[3:5]])
        nData.extend([float(v) for v in lineList[5:]])
    inFile.close()
    vertexData = np.array(vData, dtype=np.float32)
    texCoordData = np.array(tData, dtype=np.float32)
    normalData = np.array(nData, dtype=np.float32)
    return vertexData, texCoordData, normalData

def createMeshBuffers(vertices, texCoords, normals):
    v, t, n = vertices, texCoords, normals
    vbo, tbo, nbo = glGenBuffers(3)
    glBindBuffer(GL_ARRAY_BUFFER, vbo)
    glBufferData(GL_ARRAY_BUFFER, len(v)*4, v, GL_STATIC_DRAW)
    glBindBuffer(GL_ARRAY_BUFFER, tbo)
    glBufferData(GL_ARRAY_BUFFER, len(t)*4, t, GL_STATIC_DRAW)
    glBindBuffer(GL_ARRAY_BUFFER, nbo)
    glBufferData(GL_ARRAY_BUFFER, len(n)*4, n, GL_STATIC_DRAW)
    glBindBuffer(GL_ARRAY_BUFFER, 0)
    return vbo, tbo, nbo

if __name__ == ""__main__"":
    window = Window(""Shadow Mapping Test"", 800, 600, 60)
    glClearColor(0.0, 0.0, 0.0, 1.0)
    glEnable(GL_CULL_FACE)
    glEnable(GL_TEXTURE_2D)
    glEnable(GL_DEPTH_TEST)
    time = 0

    v, t, n = loadAA7(""./data/blockworld.aa7"")
    vbo, tbo, nbo = createMeshBuffers(v, t, n)
    shadowMapShader = Shader(""./shaders/shadowMap.vert"", ""./shaders/shadowMap.frag"")
    shadowMapShader.compile()
    displayShader = Shader(""./shaders/display.vert"", ""./shaders/display.frag"")
    displayShader.compile()

    img = Image.open(""./data/blockworld.png"")
    imgWidth, imgHeight = img.size
    imgData = np.array(img)
    modelTex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, modelTex)
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imgWidth, imgHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, imgData)
    glBindTexture(GL_TEXTURE_2D, 0)

    rendertarget = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, rendertarget)
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, None)
    fbo = glGenFramebuffers(1)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo)
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, rendertarget, 0)
    glBindFramebuffer(GL_FRAMEBUFFER, 0)

    lightPos = Vec3(150, 150, 0)
    cameraPos = Vec3(0, 200, -300)

    while True:
        window.update()
        time += 1

        #Pass 1: Render to Texture
        shadowMapShader.enable()
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        lightProj = Mat4().perspective(60, float(512)/512, 1, 1000)
        lightView = Mat4().lookAt(lightPos, Vec3(0, 0, 0), Vec3(0, 1, 0))
        modelMatrix = Mat4().rotateY(time)
        glMatrixMode(GL_PROJECTION)
        glLoadMatrixf(lightProj.data)
        glMatrixMode(GL_MODELVIEW)
        glLoadMatrixf(lightView.data)
        glMultMatrixf(modelMatrix.data)
        glBindFramebuffer(GL_FRAMEBUFFER, fbo)
        glViewport(0, 0, 512, 512)
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        glEnableClientState(GL_VERTEX_ARRAY)
        glVertexPointer(3, GL_FLOAT, 0, v)
        glDrawArrays(GL_TRIANGLES, 0, len(v) // 3)
        glDisableClientState(GL_VERTEX_ARRAY)
        shadowMapShader.disable()

        #Pass 2: Render the scene with shadows
        bias = Mat4([0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.5, 0.5, 1.0])
        biasMVPMatrix = bias.mul(lightProj).mul(lightView).mul(modelMatrix)
        glViewport(0, 0, 800, 600)
        glBindFramebuffer(GL_FRAMEBUFFER, 0)
        cameraProj = Mat4().perspective(30, float(800)/600, 1, 1000)
        cameraView = Mat4().lookAt(cameraPos, Vec3(0, 0, 0), Vec3(0, 1, 0))
        modelMatrix = Mat4().rotateY(time)
        glMatrixMode(GL_PROJECTION)
        glLoadMatrixf(cameraProj.data)
        glMatrixMode(GL_MODELVIEW)
        glLoadMatrixf(cameraView.data)
        glMultMatrixf(modelMatrix.data)
        displayShader.enable()
        glActiveTexture(GL_TEXTURE1)
        glBindTexture(GL_TEXTURE_2D, rendertarget)
        glActiveTexture(GL_TEXTURE0)
        glBindTexture(GL_TEXTURE_2D, modelTex)
        displayShader.setUniform(""u_modelTexture"", ""sampler2D"", 0)
        displayShader.setUniform(""u_shadowMap"", ""sampler2D"", 1)
        displayShader.setUniform(""u_biasMVPMatrix"", ""mat4"", biasMVPMatrix.data)
        glEnableClientState(GL_VERTEX_ARRAY)
        glEnableClientState(GL_TEXTURE_COORD_ARRAY)
        glVertexPointer(3, GL_FLOAT, 0, v)
        glTexCoordPointer(2, GL_FLOAT, 0, t)
        glDrawArrays(GL_TRIANGLES, 0, len(v) // 3)
        glDisableClientState(GL_VERTEX_ARRAY)
        glDisableClientState(GL_TEXTURE_COORD_ARRAY)
        displayShader.disable()

        #DEBUG: Display the render texture
        glViewport(0, 0, 800, 600)
        glBindFramebuffer(GL_FRAMEBUFFER, 0)
        glMatrixMode(GL_PROJECTION)
        glLoadIdentity()
        glOrtho(-1, 1, -1, 1, -1, 1)
        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()
        glActiveTexture(GL_TEXTURE0)
        glBindTexture(GL_TEXTURE_2D, rendertarget)
        glBegin(GL_QUADS)
        glColor3f(1,1,1)
        glTexCoord2f(0, 0); glVertex3f(0.5, 0.5, 0)
        glTexCoord2f(1, 0); glVertex3f(1, 0.5, 0) 
        glTexCoord2f(1, 1); glVertex3f(1, 1, 0)
        glTexCoord2f(0, 1); glVertex3f(0.5, 1, 0)
        glEnd()

I am also including my shaders for pass 1 (shadowMap.vert/frag) and pass 2 (display.vert/frag) in case the error is in one of these, but they seem to make sense to me (pass 1 outputs linearized fragment depth, while pass 2 transforms the vertices with a biased light-space matrix before performing a depth comparison between the depth texture and the scene).

shadowMap.vert

#version 120

void main()
{
    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
}

shadowMap.frag

#version 120                                                     

void main()                                                                         
{
    float z = gl_FragCoord.z;
    float n = 1.0;
    float f = 1000.0;
    //convert to linear values   
    //formula can be found at www.roxlu.com/2014/036/rendering-the-depth-buffer 
    float c = (2.0 * n) / (f + n - z * (f - n));                             
    gl_FragDepth = c;          
}
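The linearization formula above can be sanity-checked outside the shader. This small Python sketch (the function name is mine, added for illustration) evaluates it with the same n and f as in the fragment shader:

```python
# Sanity check for the depth linearization used in shadowMap.frag:
#   c = (2n) / (f + n - z * (f - n)), with n = 1 and f = 1000.

def linearize_depth(z, n=1.0, f=1000.0):
    # Map a window-space depth z in [0, 1] to a linearized value in (0, 1].
    return (2.0 * n) / (f + n - z * (f - n))

# At the far plane (z = 1) the result is exactly 1.0, and values near the
# camera are compressed toward 0, as expected for a linearized depth.
print(linearize_depth(0.0))  # ~0.002
print(linearize_depth(1.0))  # 1.0
```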

display.vert

#version 120

uniform mat4 u_biasMVPMatrix;
varying vec4 v_shadowCoord;

void main()
{
    mat4 bias = mat4(0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.5, 0.0, 0.5, 0.5, 0.5, 1.0);
    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
    v_shadowCoord = u_biasMVPMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}

display.frag

#version 120
uniform sampler2D u_modelTexture;
uniform sampler2D u_shadowMap;
varying vec4 v_shadowCoord;

void main()                                                                         
{
    vec3 projCoords = v_shadowCoord.xyz/v_shadowCoord.w;
    float closestDepth = texture2D(u_shadowMap, projCoords.xy).r;
    float currentDepth = projCoords.z;  
    float shadow = currentDepth > closestDepth  ? 1.0 : 0.0;
    gl_FragColor = shadow * texture2D(u_modelTexture, gl_TexCoord[0].xy);
}
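Incidentally, the bias matrix used both in the Python pass 2 and inside display.vert is just a remap of clip-space [-1, 1] to the [0, 1] range that texture2D expects. A small pure-Python check (the helper name is mine, not from the code above) confirms the column-major layout does exactly that:

```python
# The 16 bias values, in OpenGL column-major order as used by Mat4 above.
BIAS = [0.5, 0.0, 0.0, 0.0,
        0.0, 0.5, 0.0, 0.0,
        0.0, 0.0, 0.5, 0.0,
        0.5, 0.5, 0.5, 1.0]

def apply_colmajor(m, v):
    # Multiply a column-major 4x4 matrix by a 4-component vector.
    return [sum(m[col * 4 + row] * v[col] for col in range(4)) for row in range(4)]

# The NDC corner (-1, -1, -1) maps to texture-space (0, 0, 0) ...
print(apply_colmajor(BIAS, [-1.0, -1.0, -1.0, 1.0]))  # [0.0, 0.0, 0.0, 1.0]
# ... and (1, 1, 1) maps to (1, 1, 1).
print(apply_colmajor(BIAS, [1.0, 1.0, 1.0, 1.0]))  # [1.0, 1.0, 1.0, 1.0]
```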

Update: Here is a picture of the results achieved when using this as the pass 2 fragment shader:

gl_FragColor = vec4(v_shadowCoord.xyz/v_shadowCoord.w, 1.0);

I also found it interesting that I get ""shadows"" when I use this as the shader instead (but I have no idea what it means):

#version 120                                                     

uniform sampler2D u_modelTexture;
uniform sampler2D u_shadowMap;
varying vec4 v_shadowCoord;

void main()                                                                         
{
    vec3 projCoords = v_shadowCoord.xyz/v_shadowCoord.w;
    float closestDepth = texture2D(u_shadowMap, projCoords.xy).z;
    gl_FragColor.rgb = vec3(closestDepth);
    gl_FragColor.a = 1.0;
}

","78005","","78005","","2016-01-23 18:31:12","2016-01-25 15:38:04","Implementing Shadow Mapping in Python and OpenGL 2.1","","1","7","1","","","CC BY-SA 3.0" "115361","1","115363","","2016-01-22 07:26:18","","3","272","

I have the following code for my HLSL pixel shader, modified from another post here on GameDev (Link), but I have a few problems with it:

// calculate UV and get texture and normal.
float2 UV = Input.position.xy / Input.ScreenSize.xy;
float4 DiffuseColor = ColorTexture.Sample( SampleType, UV );
float4 NormalColor = NormalTexture.Sample( SampleType, UV );
float4 normal = 2.0f * NormalColor - 1.0f;

// calculate distance.
float3 LightDir = LightPos - Input.position.xyz;
float distance = 1 - length( LightDir ) / LightRadius;

// calculate dot of normal and light direction.
float NdL = max( 0, dot( normal.xyz, LightDir ) );

// get final color.
float4 finalColor = ( DiffuseColor * ambientIntensity ) + DiffuseColor * distance * LightIntensity * LightColor * NdL;

return float4( finalColor.rgb, DiffuseColor.a );

ambientIntensity and LightIntensity are currently 1.0f to simplify things. LightRadius depends on the size of the geometry, but is in pixels. ~80 for testing. DiffuseColor is a texture resulting from applying self-illumination and darkening the base image. NormalColor is the normal map for the image.
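To make the light falloff concrete, here is the same per-pixel math as a plain Python sketch. The helper name is mine, and note one assumption: this version normalizes the light direction before the dot product, which is the usual convention for N.L lighting (the HLSL above feeds the unnormalized LightDir into dot):

```python
import math

def point_light_factor(pixel_pos, light_pos, normal, light_radius):
    # Attenuation times N.L for one pixel; normal is assumed unit length.
    light_dir = [l - p for l, p in zip(light_pos, pixel_pos)]
    dist = math.sqrt(sum(c * c for c in light_dir))
    # Linear falloff: 1 at the light centre, 0 at light_radius (clamped at 0).
    attenuation = max(0.0, 1.0 - dist / light_radius)
    light_dir = [c / dist for c in light_dir]  # normalize; the HLSL omits this
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return attenuation * n_dot_l

# A surface facing the light, 40 px from an 80 px radius light: factor 0.5.
print(point_light_factor((0.0, 0.0, 0.0), (0.0, 0.0, 40.0), (0.0, 0.0, 1.0), 80.0))
```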

This shader needs to run for every light on the screen, which isn't that many.

For my tests I use the images from the blog post here. It's a little old, but it shows pretty much exactly what I need to do, and which I haven't managed yet.

Now, I did enough testing to know that all input and cbuffer parameters are valid, but the results I get are not what I hoped for.

Here are my problems:

  • Problem 1: My knowledge of math is limited and I don't understand exactly what the dot product does, why it is needed in the light calculation, or why the distance is calculated as it is. Is there a resource online that explains lighting in terms of HLSL, or at least in a way meant for a programmer rather than with big equations meant for a mathematician? If I don't understand the basics, I'm afraid I won't be able to improve and I'll be stuck every time I need to add new features (e.g. shadows), but I need to get these things running asap.

  • Problem 2: The results I have are not good. When I move the light around, I can actually see where the geometry cuts off, and it is as if the light does not trail off properly near the edges of the geometry. Are the calculations wrong?

  • Problem 3: I built a 2D shape editor that I use to design the shape of the lighting geometry, but I don't know how to edit the shaders or the geometry's UV to have the custom shapes (fans, circles, etc) shaded properly. The light I render is always shaded as if it was a circle.

I spent the last two weeks working on this and whenever I search on Google, I keep getting pages I have already visited, so I'm quite lost right now. Any suggestions would be highly appreciated.

My game is a tile-based 2D game, thought I would mention that.

","24101","","-1","","2017-04-13 12:18:49","2016-01-22 08:39:25","2D deferred lighting calculations not working","<2d>","1","2","","","","CC BY-SA 3.0" "115367","1","122119","","2016-01-22 12:00:28","","1","1247","

In my Gamemaker: Studio game, I have a collision script for my enemy. The enemy is an alien, oSwarmer, moving through space which is also filled with drifting, spinning bits of debris. Some debris barely spins at all, others spin quite fast.

oSwarmer executes the following collision script every step. As you can see, if it is about to come into contact with some debris (here called oSolid), it changes direction away from the colliding oSolid and maintains its original speed.

SolidTouching = instance_place(x + hspeed, y + vspeed, oSolid)

if instance_exists(SolidTouching) {

    OriginalSpeed = CurrentSetSwarmerSpeed

    if speed > 0 {
        direction = direction - random_range(130, 220)
    } else if speed <= 0 {
        direction = point_distance(x, y, SolidTouching.x, SolidTouching.y) - random_range(130, 220)
    }
    speed = OriginalSpeed
}

However, sometimes when coming into contact with an oSolid, they spin madly on the spot. I've noticed that this seems to occur when the oSolid is spinning. These creatures do not move very fast, so their initial 'bounce' is not enough to carry them out of the path of the rest of the oSolid which is coming around to meet them. I think they then get caught in a loop of constantly trying to move in the opposite direction to the colliding oSolid (which is changing every step because it is spinning).
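For illustration, the deflection step can be sketched in Python (the function name and the simulation loop are mine, not part of the GML): if the collision test keeps succeeding because the oSolid spins into the swarmer's new path, the heading is re-rolled every step and never settles.

```python
import random

def deflect(direction):
    # One step of the swarmer's bounce, mirroring the GML line
    #   direction = direction - random_range(130, 220)
    # The result is wrapped into [0, 360) like a GML direction.
    return (direction - random.uniform(130.0, 220.0)) % 360.0

random.seed(42)
heading = 90.0
headings = []
# While the spinning oSolid keeps registering a collision, every step
# re-rolls the deflection, so the heading keeps jumping around -- which
# looks exactly like the on-the-spot spinning described above.
for _ in range(4):
    heading = deflect(heading)
    headings.append(heading)
print(headings)
```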

What I cannot work out is how to remedy this. Does anybody have a suggestion?

","72795","","1929","","2016-03-22 23:26:18","2016-05-29 17:51:16","Detecting Collision in Next Step when Collider is rotating","","2","3","","","","CC BY-SA 3.0" "149441","1","149454","","2017-10-11 14:19:35","","2","365","

I am fairly new to Unity 5. As I was browsing through some prefabs, I found one for the main menu in which UnityEvent was used in scripts to list the menu options.

This UnityEvent was an array, UnityEvent[] Event, which was nowhere initialized to anything but showed no compiler errors. During a run, it pointed to some predefined sets which the author mentioned in the user manual... My questions are:

  • Is it possible to use a UnityEvent array without specifying the size of the array?

  • What happens to the UnityEvent if we don't assign any of the events, but the Invoke functionality is called during runtime?

By the way, the prefab I used was: Main Menu with Parallax effects

","108031","","","","","2017-10-12 01:13:46","UnityEvent Array coding C#","","1","1","","","","CC BY-SA 3.0" "149442","1","149663","","2017-10-11 14:25:13","","2","342","

Maybe I'm just googling the wrong thing, but what is Unreal's default audio sample rate: 44.1 kHz or 48 kHz? Silly simple question, but thanks for helping out!

","108215","","108215","","2017-10-19 18:22:32","2017-10-19 18:22:32","What is the default audio bitrate in Unreal 4?","