_id (int64) | text (string) |
---|---|
37 | Help implementing a virtual d-pad Short version: I am trying to move a player around on a tilemap, keeping it centered on its tile, while smoothly controlling it with SneakyInput's virtual joystick. My movement is jumpy and hard to control. What's a good way to implement this? Long version: I'm trying to get a tilemap-based RPG "layer" working on top of cocos2d-iphone. I'm using SneakyInput for input right now, but I've run into a bit of a snag. Initially, I followed Steffen Itterheim's book and Ray Wenderlich's tutorial, and I got jumpy movement working. My player now moves from tile to tile, without any animation whatsoever. So, I took it a step further: I changed my player.position assignment into a CCMoveTo action. Combined with CCFollow, my player moves pretty smoothly. Here's the problem, though: between each CCMoveTo, the movement stops, so there's a bit of jumpiness introduced between movements. To deal with that, I changed my CCMoveTo into a CCMoveBy, and instead of running it once, I decided to wrap it in a CCRepeatForever. My plan was to stop the repeating action whenever the player changed direction or released the d-pad. However, when the movement stops, the player is not necessarily centered on a tile, as it should be. To correctly position the player, I use a CCMoveTo with the closest position that would put the player back onto the grid. This reintroduces the earlier problem of jumpiness between actions. What is the correct way to implement a smooth joystick while smoothly animating the player and keeping it on the "grid" of tiles? Edit: it turns out that this was caused by a "bug fix" in the cocos2d engine. |
37 | What should I use for the event name when logging metrics with Flurry? I'm working on a multiplayer game and I want to use Flurry to record game events. In the game you can build, grow and train troops. In Flurry you can log an event (optionally with parameters). It would be great to be able to track the progress of a player depending upon what kinds of things they build, grow and so on and at what experience levels they do that at, but I don't know whether it's best to just have different sets of events for different types of actions... like this NSDictionary buildParams NSDictionary dictionaryWithObjectsAndKeys "Build Type", "Castle", "EXP", "234523", Capture user status nil Flurry logEvent "Build Action" withParameters buildParams And so on... so another for growing something, another for training, et cetera. Or is it better to have one global type of record of an event like this NSDictionary actionParams NSDictionary dictionaryWithObjectsAndKeys "Action", "Build", "Type", "Castle", "EXP", "44234", nil Flurry logEvent "Game Action" withParameters actionParams In this method all the data is stored in the one event type. I'm not sure which will give me the best result when it comes to funneling, segmentation, and all that? |
37 | Mouse joint isn't restricting the ball from going to the other part of the screen I'm developing a application in cocos2d using the Box2D framework, but unfortunately I am having issues. I'm not able to restrict the orange ball in the half screen area, taking the image below as reference, I don't want to allow the ball to go to the opposite part of the screen. I'm using the b2MouseJoint to move the ball around the screen. b2PrismaticJointDef seems to restrict on any particular axis, but I need to restrict on a particular rectangular area of the screen. |
37 | How to port a game from iPad to iPhone I just finished developing a 2D side-scrolling game for the iPad using Cocos2d and Box2d. Now we want to make an iPhone 4 version of the game, but I'm still not sure what the best way to do it is. I was thinking of just creating a new target in my project, removing all the current resources from the Build Phases, and creating a new Resources folder with all the assets of the game scaled to the new resolution. Of course this approach would take a lot of work, especially from the designer, since there are a lot of images in the game: 30 levels with 3 layers of parallax, and over 100 animations between all the elements, plus the SVG levels that map the physics world (we are using a level SVG parser). I would also have to check all the touch functions and the sprites with hard-coded positions (mostly the HUD and the menu). I did try to use winSize for this, but I'm pretty sure there are sprites with fixed positions. I know that checking the menu and HUD positions should not take much time, but I'm worried about the time it could take to scale all the images, and especially the SVG files that map the physics of the game. I would like to know if I'm on the right path regarding the porting of the game, or if someone has thought of a better or easier solution. I tried modifying the gluLookAt eye z parameter, but this didn't work really well. I haven't worked with OpenGL since college, around 3 years ago, and I don't remember the math behind its low-level pipeline very well. Anyway, I don't think that modifying the Cocos2d rendering steps could work for my problem. |
37 | Updating games for iOS 6 and new iPhone iPod Touch Say I have a game that runs full screen on iPhone 4S and older devices. The balance of the game is just right for the 480 x 320 screen and associated aspect ratio. Now I want to update my game to run full screen on the new iPhone iPod Touch where the aspect ratio of the screen is different. It seems like this can be challenging for some games in terms of maintaining the "balance". For example if the extra screen space was just tacked onto the right side of Jet Pack Joyride the balance would be thrown off since the user now has more time to see and react to obstacles. Also it could be challenging in terms of code maintenance. Perhaps Jet Pack Joyride would slightly increase the speed of approaching obstacles when the game is played on newer devices. However this quickly becomes messy when extra conditional statements are added all over the code. One solution is to have some parameters that are set in once place at start up depending on the device type. What are some strategies for updating iOS games to run on the new iPhone and iPod Touch? Edit more info I wonder what happens with a game like Plants vs Zombies? There's quite a bit of artwork that's probably sized to the 320x480 screen. Perhaps the lawn background's width can be increased while the size of the zombies and the row heights stay the same. |
37 | Detecting right left collisions with a bounding box I'm building a platformer in cocos2d, my first game project. I'm working on movement and collision detection. I'm using a tilemap with a "meta" layer of invisible blocks that are designated collidable. Everything seems to work pretty well except for one minor detail when the character jumps from above into a platform, he does not fall but clips through it. If he jumps from below, it recognizes it and resets his Y velocity accordingly. I know that it's because of how I'm detecting the collisions, and just doing a simple comparison of the origin.y's, but I'm unsure of how to refactor this code better (void) update (ccTime) dt CGPoint scaledVelocity ccpMult(leftJoystick.velocity, 35.0f) CGPoint newPosition ccp(player.position.x scaledVelocity.x dt, player.position.y) CGPoint oldPosition player position float velocityX player.Velocity.x scaledVelocity.x dt float velocityY player.Velocity.y BOOL isCollision NO BOOL isCollisionBelow NO BOOL isCollisionRight NO BOOL isCollisionLeft NO NSMutableArray collisionSprites NSMutableArray alloc init if (oldPosition.x gt newPosition.x) player.flipX YES else if (oldPosition.x lt newPosition.x) player.flipX NO for (CCSprite sprite in gameplayLayer.collidableTiles) if (CGRectIntersectsRect(player.boundingBox, sprite.boundingBox)) isCollision YES collisionSprites addObject sprite Figure out which direction collision is in if (isCollision) for (CCSprite collisionSprite in collisionSprites) if (collisionSprite.boundingBox.origin.y lt player.boundingBox.origin.y) isCollisionBelow YES if ((collisionSprite.boundingBox.origin.x gt player.boundingBox.origin.x) amp amp (collisionSprite.boundingBox.origin.y gt player.boundingBox.origin.y)) isCollisionRight YES if ((collisionSprite.boundingBox.origin.x lt player.boundingBox.origin.x) amp amp (collisionSprite.boundingBox.origin.y gt player.boundingBox.origin.y)) isCollisionLeft YES if (isCollisionLeft) velocityX 0.5f else if (isCollisionRight) velocityX 0.5f if (isCollision amp amp !isCollisionBelow amp amp player.characterState ! kStateFalling) velocityY 0 velocityY 0.8f player setCharacterState kStateFalling else if (!isCollisionBelow) velocityY 0.7f else if (isCollisionBelow) velocityY 0 player setCharacterState kStateIdle if (rightButton.active) if (player.characterState ! kStateJumping) velocityY 15.0f player setCharacterState kStateJumping Ground friction if (player.Velocity.x gt 0) if (player.Velocity.x lt 0.5f) velocityX 0 else velocityX 0.5f if (player.Velocity.x lt 0) velocityX 0.5f player setVelocity ccp(velocityX, velocityY) player setPosition ccp(player.position.x player.Velocity.x, player.position.y player.Velocity.y) gameplayLayer setViewpointCenter newPosition player setOldPosition player.position How can I refactor this code to work properly from all angles and reduce the complexity (I feel that there's too many booleans, and they all get a bit confusing). |
37 | Game server for an android iOS turn based board game I am currently programming an iPhone game and I would like to create an online multiplayer mode. In the future, this app will be port to Android devices, so I was wondering how to create the game server? First at all, which language should I choose? How to make a server able to communicate both with programs written in objective c and Java? Then, how to effectively do it? Is it good if I open a socket by client (there'll be 2)? What kind of information should I send to the server? to the clients? |
37 | Best way to learn iPhone game development? I know PHP and some Python, but not much C. Where should I start learning iPhone game development? Are there any recommended books or tutorials for beginners? I'm looking at using cocos2d, but I'm open to anything that isn't too limited. |
37 | TiledMaps and real objects I'm creating a simple game for the iPhone using Cocos2d. I'm having a hard time understanding how add real objects to a tiled map. I know that you can add an object layer to the tiled map and set the position of it, but I'm not sure how to define WHAT object it should be. For example, if I'm creating a simple Super Mario Bros. I want to add different blocks to the map, one should hold a star, one should hold a coin, and so on. How should I define what object it is? So my question, how do I correctly create class objects in a tiled map? |
37 | Is it good practice to clamp FPS well above the lower limit that gives the illusion of movement? I started at over 50 FPS on the iPhone, but now I'm below 30 FPS. I've seen most iPhone games clamped to either 60 or 30 FPS, even when 24 or less would give the illusion of movement. I consider my limit to be a little over 15 FPS; in fact my physics simulation is updated at that rate (15.84 steps/s), as that is the lowest rate that still gives fluid movement, and a bit lower gives jerky motion. Is there a practical reason to clamp FPS well above that lower limit? Update: the following image may help clarify. I can independently set the physics simulation step, the frame rate, and the simulation update interval. My concern is: why should I clamp any of those to values greater than the minimum? For instance, to conserve battery life I could just choose the lower limits, but it seems that 60 or 30 FPS are the most commonly used values. |
37 | iPhone app activated before associated in app purchase activated? I recently submitted my iPhone game for review. I also approved the in app purchase for my game (and provided the in app purchase screenshot). It's a few days later and the app status has been updated from "waiting for review" to "in review"... while the in app purchase status is still "waiting for review". Could the app go live in the store before the in app purchase? If so what happens when the user tries to make the in app purchase? Anything the developer can do to remedy this? |
37 | What technologies are used to develop games for tablets and iPhones? What technologies are used to develop games for tablets and iPhones? Would Flash be commonly used? Is HTML5 capable of animation and interactivity, or would a game file simply be placed within HTML5 code? Is Flash and HTML5, and then wrapping the game for different phone tablet operating systems a common combination? |
37 | How to know when to use UIViews and other Quartz2D features and when to use OpenGL ES? When starting a new 2D project on iOS, it seems pretty easy to start creating UIView subclasses and having fun with Quartz to build very quick prototypes, but I am guessing there will be times when it is impractical (or impossible) to use such powerful (and, compared to OpenGL ES, wasteful) abstractions. I know a couple dozen subviews work alright, but there must be a point where having more than some number makes it impossible to keep a good frame rate. How do you know when it would be a good idea to drop UIKit and start doing more powerful stuff in OpenGL? |
37 | RTS game engine for iOS iPhone Anyone know any free or non free RTS engine that can be used for iOS game development? |
37 | .NET WCF as backend server for iPhone games I'm planning to develop an iPhone game. Is it possible to use .NET WCF as a backend? The server would be windows 2008 and MS SQL database. |
37 | Velocity question from an iPhone book I am reading an iPhone game dev book and I have a question about velocity. Mainly it is this line: playerVelocity = ccp(playerVelocity.x * dec + acceleration.x * sens, 0). Why do you multiply playerVelocity.x by the deceleration? The book says it works by reducing the current velocity so it is easier to change direction, and then adding acceleration.x * sens. This piece of code uses the accelerometer. float dec = 0.4f; // lower = quicker to change direction. float sens = 6.0f; // higher = more sensitive. float maxVel = 100; playerVelocity = ccp(playerVelocity.x * dec + acceleration.x * sens, 0); NSLog(@"%@", NSStringFromCGPoint(playerVelocity)); if (playerVelocity.x > maxVel) playerVelocity.x = maxVel; else if (playerVelocity.x < -maxVel) playerVelocity.x = -maxVel; (See the sketch after the table.) |
37 | Calling CCRepeatForever on CCLabelTTF Makes Label Not Show Up I have a CCLabelTTF in the top right-hand corner of the view. (void)scrollScoreLabelAndMonkey scoreLabel CCLabelTTF labelWithString "0000" fontName "Marker Felt" fontSize 30 scoreLabel.position ccp(420,300) scoreLabel.color ccGREEN self addChild scoreLabel I simply add the code below to the bottom of the above method. However, as soon as I add these lines, the label doesn't show up at all. id moveScore CCMoveBy actionWithDuration .7 position ccp(scoreLabel.position.x 10, 0) CCRepeatForever scrollScoreLabel CCRepeatForever actionWithAction moveScore scoreLabel runAction scrollScoreLabel Any help is greatly appreciated! |
37 | Detecting End of Animation So I am making a death animation for a game. enemy1 is a UIImageView, and what I'm doing is: when an integer is less than or equal to zero, it calls this deathAnimation, which only happens once. What I want to do is use a CGPointMake right when the animation is finished. Note that before the deathAnimation is called, there is another animation that is constantly being called 30 times a second. I'm not using anything like cocos2d. if (enemy1health lt 0) self slime1DeathAnimation How can I detect the end of this animation? This is how the animation is done: (void)slime1DeathAnimation enemy1.animationImages NSArray alloc initWithObjects UIImage imageNamed "Slime Death 1.png" , UIImage imageNamed "Slime Death 2.png" , UIImage imageNamed "Slime Death 3.png" , UIImage imageNamed "Slime Death 4.png" , UIImage imageNamed "Slime Death 5.png" , nil enemy1.animationDuration 0.5 enemy1.animationRepeatCount 1 enemy1 startAnimating If you need more code, just ask. (See the sketch after the table.) |
37 | Need ideas on how to give my levels structure I am making an iOS game for a project at school. It is going to be a tiny bit like Fruit Ninja, as in it will have different things on the screen, and when you hit them, they die, and you get points. The trouble is that unlike Fruit Ninja, my game will have different types of sprites, all doing different things (moving different places, doing different things, etc). The one thing that is bad about having all of these sprites that do different things is that it is hard for them to look neat on the screen all together. I was planning on having a couple of different gamemodes Time Trial You have 120 seconds to kill as many sprites as possible. Survival You have three lives, every time you try to hit a sprite and miss, you lose a life. ???? Whatever I think of. I am a rookie to game design in general, and I don't know the best way to make my game look good, and play well. I could have all of these sprites on the screen at the same time, or I could have them come in waves, for example 10 of sprite a come on, and once they are killed, 10 of sprite b come on, etc... Please give me your opinion about which one I should code. If you have any other suggestions for either a third gamemode, or a completely different way to make the levels, feel free to tell me. |
37 | On cocos2d, which is better: one monolithic game cycle or many scheduled selectors? I am building a simple tower defense game, and I have to decide between two ways (or more, feel free to suggest other ways) to do the game cycle. Option 1: monolithic game cycle. I could have a single global scheduled selector that calls update for every sprite/object/logic thing that I need to calculate. I could achieve this by having a singleton Game class that has a mutable array of objects, and then on the Scene (or that singleton) I could set it up with self schedule selector(gameCycle ) interval 0.1 and then in that gameCycle do something like (void) gameCycle (ccTime)dt for(Entity e in Game sharedGame entities ) e gameCycle Option 2: each object could create its own scheduled selector in init. This is almost the same, but instead of a singleton, every time that I create an object I could create a new scheduled selector, as such: (id) initWithTexture (CCTexture2D )texture rect (CGRect)rect if( (self super initWithTexture texture rect rect )) CCTouchDispatcher sharedDispatcher addTargetedDelegate self priority 0 swallowsTouches YES self schedule selector(gameCycle ) interval 0.1 return self Now I don't know which is better (or whether there is a very big difference in performance). What is the usual best approach for cocos2d? |
37 | I want to move an arrow by dragging and then toss/throw it on a swipe event In my application I have one arrow image with a fixed center, and I can rotate it around that center in the touch-move method. I also need to toss/throw the arrow after some rotation by dragging (or without it). How can I add the toss/throw action to my application? Thank you very much in advance. |
37 | Set texture of an LHSprite that is loaded from LevelHelper How do I set the image texture of an LHSprite that is loaded into Xcode using LevelHelper and SpriteHelper? I am using sprite sheets. So I tried to load the image the old-fashioned way using CCSpriteFrameCache, but unfortunately it isn't recognizing the .pshs file. Any help would be great! |
37 | How to load a text file from a server into an iPhone game with AS3 in Adobe AIR? I'm creating an iPhone game with Adobe AIR, and I want to be able to load a simple text message from my server into a dynamic text box on the game's front screen (and then be able to update that text file on the server, so it updates automatically in the game after the game is on the App Store). How would I go about achieving that? Is it as simple as using a getURL? Are there any specific issues with trying to do this on the iPhone via AIR that I should be aware of? Thanks for any advice. |
37 | iPhone game framework on top of cocos2d? I am using cocos2d to develop an iPhone game; however, it is just a 2D engine and gives the developer a lot of flexibility, i.e. the code tends to become unmanageable. Are there any higher-level frameworks for game development, preferably built on cocos2d? |
37 | How to test a network game on a mobile device I am looking to test a network game with, say, 10 or more users on a mobile device. Think of it like an MMO with a lot fewer users. (Assuming I can get 10 running, I'd test for more users later.) The test is for two reasons: to see if and how the server will handle that many users, and to see the performance on the client side (seeing how it has to update and render all these objects on a mobile device). How would I go about testing 10 or more users? Unfortunately I do not have 10 devices to test on, only one. I've seen a few MMOs available on Android and iPhone. How do these developers go about testing their games on devices? |
37 | Create a simple tilemap programmatically I am working on a tile-based roguelike. I got some of the basics working using this tutorial: http://www.raywenderlich.com/1163/how-to-make-a-tile-based-game-with-cocos2d. But I want to be able to create a map programmatically instead of using the Tiled editor: just add an NSArray of sprite tiles to my CCLayer-derived class or something. Or should I make use of the CCTMXTiledMap class? Can anyone give me some hints about how to do this, or point me at a tutorial or sample code somewhere? Thanks a lot. |
37 | What are some low-level performance tweaks for iPhone games written in C? I'm interested in some performance tweaks for a relatively simple OpenGL ES-based 2D iPhone game. What performance tweaks have you found? The performance of the game is pretty good on most devices (3G, 2nd gen touch, and iPhone 4)... but I still want to give the user the best experience possible. P.S. I'm still using OpenGL ES 1.1. |
37 | iPhone 3GS can't ever seem to hold 60fps with CADisplayLink So I've switched from NSTimer to CADisplayLink and I'm still seeing unexpected variation in my frame counter it fluctuates between 59 60fps, even when I'm not rendering much. Has anyone else seen this? Is this an expected variation in iOS? Or should I look more closely at my game loop? |
37 | How to sync game ticks in a peer to peer game? I am making a 2-player iPhone action game using a synchronization service (in this case Firebase). The service allows state syncing through the internet, but I have to execute all game logic on the phones. I've read a bunch of game networking articles, and since I have no server, it seems like if I can get both phones to start the game at "exactly" the same time in real time, I can adjust everything from there. (I'm planning on using a "tick" count on each phone, sending the tick along with actions, and then rewinding the simulation if necessary. The phones divide the game into halves to determine authority.) But how do you get remote clients to start a game tick at the same time? It seems like this would be necessary in lockstep peer-to-peer games, and something like it happens in FPS games. The strategy I'm using now tries to have one client "guess" the other client's clock time, including ping time. If correct, they both start the game based on the guess (since for that message it was on). a sends a.localTime. b receives the message and returns b.localTime and dTime(a b). a receives the message and returns a.localTime and dTime(b a). b receives the message; if a.localTime dTime(b a) is very close to the new b.localTime, it sends an accept message and b starts the game at the new b.localTime 1 second. a receives the accept and starts the game at the accepted a.localTime dTime(b a) 1s. I made a little number counter thing to go with the tick, and they're just barely off. I can't see how this is possible though; since A "guessed" B's time, they should start at exactly the same time... See anything wrong with my method? What's an easier way? I'm trying to avoid using a server if possible. (See the clock offset sketch after the table.) |
37 | How do I open the camera in a CCScene node in a Cocos2D application? I am new to Cocos2D and I can't find the way to open the camera from the library. I want to open the camera in the my game, can any one help me with with this? Thank you in advance. |
37 | What to think about when designing a simple GUI for a quiz game I am coming close to finish my first iPhone game ever, as a matter of fact also my first programming experience ever, which is a quiz game. I have all the functionality i want and is currently polishing it both from a code point of view as well as looking at the GUI. My initial idea was not to use any specific graphics but rather focus on the game experience and simplicity and by that only using background color, orange, and white text as well as buttons. The design is based on that all ages, from learning to read, should be able to host and play this game. However, as i am now getting close to the finish line i am starting to think what is needed from a GUI point of view. I would like to ask for some advice what to think about when designing a GUI. Is it considered OK without any 'fancy' graphics, what is the risk without it etc.? Also, what colors goes well together if i choose to use a simple GUI. I am thinking about color blindness etc. In other words how do i design a good and effective GUI for a simple game as mine? Thanks |
37 | Map file format for a real time strategy game I am planning to create a 2D RTS game on iPhone using cocos2d. Besides a tile map, are there any other suggestions for storing the map details for the game? It would be nice if a GUI editor is also provided. |
37 | Game state from Web to iPhone version? I'm currently in the planning stages for a Web and iPhone game. I'm developing it with Adobe AS3/AIR. I was wondering if it's possible for people to play the Web version, save their state of play, and then pick up again where they left off in the iPhone version (and vice versa), and how that would be achieved? The Web version will probably be on Facebook, so could I link both versions through their FB UID? |
37 | iPhone, iPad Rendering Ahead of Input (4 5 seconds) UPDATE I think I solved this one, though any thoughts or suggestions are always welcome! This one is truly bizarre... I'm developing using both an iPhone 3GS and an iPad 2. This symptom is repeatable on the older iPhone hardware much more readily than on the iPad 2. Here's what happens I start the game and the FPS is ticking at about 47 on the 3GS in OpenGLES2.0. I have quite a lot on screen right now, so that is reasonable at this stage (I still have a few optimizations I can make too). If you put a break in touchesBegan, it takes roughly 4 5 seconds before the break occurs and the game halts. This doesn't happen all of the time, and in fact, if you restart the app several times, it usually goes away. Moreover, if I let the game sit for a bit, it will eventually "sync." Like I said, the iPad 2 rarely exhibits the problem (one out of every 15 20 test runs), but it is definitely still present. If you perform a series of touches and accelerometer events in sequence, they are queued up and rendered just like you did them 4 5 seconds later. Very reminiscent of 4 5s lag delays back in the day online ) I've tried a couple things like glFlush, or glFinish at the end of each render loop (doesn't seem to have any effect). Here's the top game loop (void) drawView (CADisplayLink ) displayLink double dt 0.0f Time in seconds since last loop if (displayLink ! nil) dt displayLink.timestamp m timestamp m timestamp displayLink.timestamp Update the game m gameEngine gt Update(dt) Render the game m gameEngine gt Render() m context presentRenderbuffer GL RENDERBUFFER UPDATE The more I think about it... this could be a loading issue. Right now, I instantiate my main controller class for the game loop in the (id) initWithFrame (CGRect) frame call, which is also where I initialize OpenGL and (finally) call displayLink addToRunLoop NSRunLoop currentRunLoop forMode NSDefaultRunLoopMode . It's almost as if the renderer gets behind or ahead of the game logic and it takes it awhile to get caught back up. From research, I could also just be throttling the CPU too hard. But if that were the case, I would imagine it wouldn't catch up... the FPS stays at a solid 44 46, and the input lag does go away if you let it sit for 10 15 seconds (with or without doing anything, it eventually catches up). I'm stumped, but I'll keep digging... UPDATE 2 Doing some profiling... figured out that the accelerometer being polled 1 60 was a bit taxing. If I moved it to 1 30, this problem went away. This is concerning to me, as it likely suggests I do have a CPU overloading issue that will only be solved through some serious optimizations across the board (which is okay, I knew I had to do it anyway). For anyone with a similar issue, especially if you are using the accelerometer, make sure you disable it if when you aren't using it. I only use the accelerometer to allow the player to rotate move the camera so when the player has the camera locked, I am now toggling it off. This has solved my problem for now, but like I said, I probably only solved it temporarily. A real solution is going to take more work! Hope this helps. |
37 | Can I start implementing Game Center in my iPhone app, without having created a new app in iTunes connect? I'm in the process of developing a game for the iPhone and I want to add Game Center support to it. The problem, as I see it, is that I need to have named my app, created an icon and uploaded screenshots etc. before I can create a leaderboard and start implementation? My game is unfinished and the iTunes Connect developer guide seems to indicate that certain information can't be edited once entered. Can anyone point me in the right direction on this one? Thanks. |
37 | 8 bit Game To pre scale images, or post scale my Cocos2d scene. Which is the better approach? I'm wanting to develop a game with an 8 bit feel. Since this game is mostly for my personal enjoyment, I've set a requirement that I want my game to have an 8 Bit feel to it that is most similar the 8 Bit systems of yore, on a lovely CRT TV. This is also my first Cocos2d project so I don't know which approach is better. I'd like the general advice of the community on how to render my game. I've come up with two possible approaches. Please advice which is better. Approach 1 Create images for backgrounds and sprites using the "8 bit" resolution of my game. After I'm done making an image, I scale it up and I save two variants the standard sized one and the " 2x" variant. When I render the game, everything looks very 8 Bit but I will still need to apply mathematics to every animation so that my virtual pixels always shift by 4 or 8 pixels (for retina displays) on the screen when they move. This approach doesn't sound difficult but it seems like it could become a bit annoying and tedious. Approach 2 Generate all content in my image designer at the exact resolution of my virtual screen. When I generate my scenes in Cocos2d, I generate them all with the "virtual resolution". When the scene is displayed on the iOS screen, I scale the entire scene up to the resolution of the device. I think this could be the simplest approach since all of my sprite and background movement mathematics can just behave as normal. However, I'm not sure that this approach is even possible. Furthermore, I know that within iOS, graphic scaling has an anti alias affect that gets applied to graphical objects as they scale up. Obviously, I'd want to cut this affect off and just use a "nearest neighbor" algorithm to scale up my scene. If I can't cut off the scaling anti aliasing, I don't want to use this method. However, if I can setup such a rendering system, I think this approach would be best. So, my question is simple. Can I use approach 2 and, if so, which approach is really easier to work with in Cocos2d and iOS? |
37 | Looking for the reference tutorials for the joints in the Box2D for iphone I can't find the tutorials of joints class in the Box2D for iPhone. I am unable to run a Testbed for iPhone Box2D. (void)ccTouchesBegan (NSSet )touches withEvent (UIEvent )event if ( mouseJoint! NULL)return UITouch mytouch touches anyObject CGPoint location mytouch locationInView mytouch view location CCDirector sharedDirector convertToGL location b2Vec2 locationWorld b2Vec2(location.x PTM RATIO,location.y PTM RATIO) ristrict the player within the ground limit keep stucking the player with grounditself..... if ( playerFixture gt TestPoint(locationWorld)) b2MouseJointDef md md.bodyA groundBody md.bodyB playerBody md.target locationWorld md.collideConnected true md.maxForce 100.0f playerBody gt GetMass() mouseJoint (b2MouseJoint ) world gt CreateJoint( amp md) playerBody gt SetAwake(true) (void)ccTouchesMoved (NSSet )touches withEvent (UIEvent )event if ( mouseJoint NULL) return UITouch myTouch touches anyObject CGPoint location myTouch locationInView myTouch view location CCDirector sharedDirector convertToGL location if (location.y lt 240.00 amp amp location.y gt 20.0f) b2Vec2 locationWorld b2Vec2(location.x PTM RATIO, location.y PTM RATIO) mouseJoint gt SetTarget(locationWorld) (void)ccTouchesEnded (NSSet )touches withEvent (UIEvent )event if ( mouseJoint) world gt DestroyJoint( mouseJoint) mouseJoint NULL |
37 | Corona SDK: animation takes a long time to play after the "prepare" step First off, I'm using the current publicly available build, version 2011.704. I'm building a platformer, and have a character that runs along and jumps when the screen is tapped. While jumping, the animation code has him assume a svelte jumping pose, and upon detection of a collision with the ground, he returns to running. All of this happens. The problem is that there is a strange gap of time, about 1/2 a second by the feel of it, where my character sits on the first frame of the run animation after landing, before it actually starts playing. This leads me to believe that the problem is somewhere between the "prepare" step of loading up a sprite set's animation sequence and the "play" step. Thanks in advance for any help. My code for when my character lands is as follows: local function collisionHandler ( event ) if (event.object1.myName "character") and (event.object2.type "terrain") then inAir false characterInstance prepare( "run" ) TODO time between prepare and play is curiously long... characterInstance play() end end |
37 | What are the restrictions on 3G online games? I am looking into making a 3G online multiplayer game for the iPhone. Multiplayer is my main focus, but I have noticed all game apps require Wi-Fi. Does anyone know if this is simply an issue with the speed of the 3G network, or does Apple put restrictions on the 3G network that prevent developers from doing this? |
37 | How do I allow a player to manage several games at once with different players using Game Center? I wish to make a turn-based multiplayer game like Words with Friends on the iPhone. I would like to know the implementation approach that allows the user to continue games that are still active, even after a couple of days. Is this even possible using Game Center? |
37 | How to connect two iPhones/iPods/iPads for gameplay over a local Wi-Fi network? I would like to add the option to play against another person over my Wi-Fi network in my pong game. How do I make this happen? Also, I would like to know how to have 4 players at the same time, as I would if I made a Monopoly-type game or something similar. I'm a newbie, so please point me in the right direction or show me some code I can use and study. Thanks guys, David |
37 | Packaging HTML5 games as applications for iPhone and Android Is it possible to package an HTML5 game for iPhone and Android as an application, or does it have to be accessed through a browser? |
37 | Which API for cross platform mobile audio? This question focuses on the APIs available on phones. I'd been planning to use OpenAL in my game for maximum portability. It runs great on Linux, so I can quickly iterate while developing as well as leverage the desktop's superior debugging tools. However, I've recently heard that Android doesn't support OpenAL well; instead they've gone with an OpenSL ES library. What I'm looking for is a free audio library that I can use with minimal custom code on iPhone, Android, and my Linux desktop. Does such a library exist? Some extra details: the game is written in C with custom minimal front ends, e.g. ObjC for iPhone, Java for Android, and SFML for desktops. I'm using OpenGL ES for portability, as the iPhone doesn't support the more advanced OpenGL APIs. |
37 | Minimum Hardware for iPhone Dev So I'm planning to get into iPhone dev (2D games). I understand that my only real option is to get a Mac. I'll probably go the MonoTouch route. I'm not sure what hardware to buy I just want the minimum that'll be "good enough" to develop with. Some posts recommend a Mac Mini, although it seems on the more expensive side even on the iPhone front, I'm not sure what to look for. Also, I may skip on the iPhone until I'm ready to launch my first game, since I can get away with the MonoTouch evaluation version. (Albeit that it's probably not the best idea.) |
37 | My game uses Game Center on iPhone; can I design a fallback for the 3G and prior devices? If I develop a game using Game Center, does iTunes Connect lock me into only supporting the 3GS and above with iOS 4.0 and above, or will it still allow sales of my game on older devices (as long as I build in a fallback so it never calls the Game Center framework)? |
37 | iPhone 3d Model format .h file, .obj, or some other? I'm beginning to write an iPhone game using OpenGL ES and I've come across a problem with deciding what format my 3D models should be in. I've read (link escapes me at the moment) that some developers prefer the models compiled in Objective C .h files. Still, others prefer having .obj as these are more portable (i.e., for deployment on non iPhone platforms). Various 3D game engines seem to support many(?) formats, but I'm not going to use any of these engines as I would like to actually learn OpenGL ES. Am I putting myself at a disadvantage here by not using a packaged engine? Thanks! |
37 | Can we develop a game for iPhone on Windows platform? Is it possible by any means to have a game developed for iPhone using the iPhone sdk on Windows? |
38 | Speed, delta time and movement player.vx = scroll speed * dt // update positions: player.x += player.vx; player.y += player.vy I have a delta time in milliseconds, and I was wondering how I can use it properly. I tried the above, but that makes the player go fast when the computer is fast and slow when the computer is slow. The same thing happens with jumping: the player can jump really high when the computer is faster. This seems wrong. Should I be doing this some other way? Thanks. (See the sketch after the table.) |
38 | How can I avoid a busy wait in this game loop implementation? I've been developing a small framework for OpenGL and WinApi for some research purposes. My biggest problem right now is the game loop. Simplified I did something like this Main thread HANDLE hUpdate (HANDLE) beginthreadex(0, 0, updateThread, 0, 0, 0) while (!done) if (PeekMessage( amp msg, hWnd, 0, 0, PM REMOVE)) if (msg.message WM QUIT) done TRUE else TranslateMessage( amp msg) DispatchMessage( amp msg) else Draw() Update thread unsigned int stdcall updateThread(void params) QueryPerformanceCounter( amp nextTick) QueryPerformanceCounter( amp lastUpdate) while (!done) LARGE INTEGER currentTick QueryPerformanceCounter( amp currentTick) while (currentTick.QuadPart gt nextTick.QuadPart) double frameTime ((currentTick.QuadPart lastUpdate.QuadPart) 1.0) clock.QuadPart QueryPerformanceCounter( amp lastUpdate) Update(frameTime) nextTick.QuadPart TIME STEP clock.QuadPart QueryPerformanceCounter( amp currentTick) return 0 This way i got fixed time step while draw will be called as much as it can (kinda what I was looking for). The BIG problem is that everything is operating using busy wait, and it just swallows CPU and still no real game logic has been written. Can anyone tell me, or redirect me to some good, efficient implementation of game loop? While I was researching I came across information that Sleep() is bad for game loop. |
38 | JavaFX AnimationTimer VS Swing Game Loop After looking at some code sources out there I noticed Java Swing Games usually create a class implementing Runnable, create a new Thread and set up the game loop in the run() call. But JavaFX games seem to simply extend from Application and run the game loop in a new AnimationTimer() ... public void handle() ... What gives? |
38 | What type of loop code runs in game engines? Recently I worked on a game in the SpriteKit engine. My question is not about SpriteKit, but generally about game engines. When I write a loop and run it (e.g. while i lt 100000), my CPU usage goes to 100%, but when I run the test game there is no change, especially in CPU usage. Why is this so? (We know game engines run in a loop that includes logic and graphics commands.) |
38 | Turn based Strategy Loop I'm working on a strategy game. It's turn based and card based (think Dominion style), done in a client, with eventual AI in the works. I've already implemented almost all of the game logic (methods for calculations and suchlike) and I'm starting to work on the actual game loop. What is the "best" way to implement a game loop in such a game? Should I use a simple "while gameActive" loop that keeps running until gameActive is False, with sections that wait for player input? Or should it be managed through the UI with player actions determining what happens and when? Any help is appreciated. I'm doing it in Python (for now at least) to get my Python skills up a bit, although the language shouldn't matter for this question. |
38 | Game Logic Update Order Is there a commonly accepted general approach to the order of processing logic updates? My current 2D platformer has objects that implement different concerns, including the following Notifiable can be event driven and scripted Collidable can interact with solid tiles (eg NPCs) Intersectable exists in 2D space and can intersect with the player (eg doors) In general the order of events in a game loop is Get input Act on input and update stuff Render I'm not sure what order to do things in point 2. I've concluded moving things like moving platforms that the player can stand on first is a good idea, but I don't know when to consider scripting. The scripting in my game can give NPCs behaviours (walk between here and there), suspend player input for 'cutscenes', show dialog screens, and move the player to a new level. |
38 | NodeJS setTimeout: how to run the callback before the delay time is exceeded? I'm developing a card game server. I want to do this: while the server processes a turn, players have 20 seconds to do something. If a player sends a request to the server within those 20 secs, the timer will stop and the callback will fire. I'm doing it like this: self.tables table.id .currentTimer setTimeout(function () callback() , 20 1000) How should I run the callback before the delay time is exceeded? |
38 | Pro's Con's of separating game logic and render threads Originally, I have thought that it is good practice to separate my game logic (updating) from my rendering thread. In this threading model, the rendering thread has no limitation on frame rate and simply draws whatever information is currently made available by the updating thread. On the other hand, the updating thread is monitored, and has a capped frame rate. I'm led to believe this is BAD and will lead to many coding struggles down the road... So, I'm wondering what are the POTENTIAL benefits for separating updating from rendering? Likewise, what are the POTENTIAL benefits gained when keeping the updating and rendering in the same thread? What are the losses for each method? |
38 | Delta times and frame lag in the game loop Let's say we have a standard gameloop like this, in pseudocode while (true) dt GetDeltaTime() Update(dt) Render() Here Update(dt) either uses a true variable timestep, or it determines how many cycles of a fixed timestep physics loop to execute based on dt. Now say we have the common case where we have mostly constant framerate except for infrequent single frame hiccups, so let's say we have dt values like 1 60, 1 60, 1 60, 1 6, 1 60, 1 60, ... By the time our GetDeltaTime() detects the larger timestep in the fourth frame, we have already rendered and presented the fourth frame! So one frame will already have been rendered with a wrong (too small) timestep no matter what we do. So if we now use the larger dt 1 6 to render the fifth frame, my understanding is that we artificially create a second frame where a wrong timestep is used, this time a too large one. I wonder if this problem is acknowledged anywhere. Wouldn't it be better, say, to use the averaged dt over the previous few frames to combat this? Here are some pictures to illustrate what I mean. I use the example of an object moving along a fixed axis with a constant speed, and using a variable timestepping scheme. The problem is essentially the same with fixed timesteps, though. The plots have time on the x axis, and the object position on the y axis. Let's say the object moving at 1 unit s, and framerate is 1 Hz. This is the ideal situation. Now let's say we have a frame where the time interval is 2 instead of 1. With a classical dt based scheme, we get this So we have one frame where the velocity is perceived too low, and one where it is perceived too high and which corrects for the velocity in the previous frame. What if we instead, say, always use a constant (or very slowly changing) dt? We get this The perceived velocity seems smoother using this approach to me. Of course, the object position is now not the "true" one, but I think humans perceive abrupt changes in velocity more clearly than such small positional offsets. Thoughts? UPDATE At least Ogre can do this http ogre.sourcearchive.com documentation 1.6.4.dfsg1 1 classOgre 1 1Root 1f045bf046a75d65e6ddc71f4ebe0b2c.html So I guess I just got downvoted for people not understanding my question, which is rather frustrating. |
38 | Varying framerate (FPS) In my game loop, I am using fixed time step for physics and interpolation for rendering as suggested on Gaffer on Games Fix Your Timestep! However, when the framerate is varying between 30 60fps during the game, the game looks jumpy. For example, balls suddenly look accelerated when the frame rate increases from 35 to 45 suddenly. Is there a way to make the game look smooth while framerate is varying? Here is my game loop protected void update(float deltaTime) do some pre stuff deltaTimeAccumulator deltaTime deltaTimeAccumulator is a class member holding the accumulated frame time while(deltaTimeAccumulator gt FIXED TIME STEP) world.step(FIXED TIME STEP, 6, 2) perform physics simulation deltaTimeAccumulator FIXED TIME STEP world.step(deltaTime, 6, 2) destroyBodiesScheduledForRemoval() render(deltaTimeAccumulator FIXED TIME STEP) interpolate according to the remaining time Here is the part related to the interpolation (related inner works of render() method) this.prevPosition this.position get previously simulated position this.position body.getPosition() get currently simulated position interpolate Vector2 renderedPosition new Vector2() if (prevPosition ! null) amp amp !isFloatApproximatelyEquals(this.prevPosition.x, this.position.x) amp amp !isFloatApproximatelyEquals(this.prevPosition.y, this.position.y)) renderedPosition.x this.position.x interpolationAlpha this.prevPosition.x (1 interpolationAlpha) renderedPosition.y this.position.y interpolationAlpha this.prevPosition.y (1 interpolationAlpha) else renderedPosition position Draw the object at renderedPosition |
39 | Find extreme points of a rotated ellipse function on a given axis How do I find the points where it is most extreme on the X and Y axes? For example, let's say I have an equation that describes an ellipse that is rotated: (x RadiusX Rx y RadiusX Ux) 2 (x RadiusY Ry y RadiusY Uy) 2 RadiusY 2. How can I find the points where it will be most extreme on each axis? Please keep in mind that the values of the variables RadiusX, RadiusY, Rx, Ry, Ux, Uy are known. An example with values: ((x 1 0.70711) (y 1 0.70711)) 2 ((x 1.414213 0.70711) (y 1.414213 0.70711)) 2 1.414213 1.414213. (See the note after the table.) |
39 | Puzzle game: clipping of non-convex polygons into pieces I have an original figure in an SVG file. I need to break it randomly into parts, an example of which is shown below. My idea is to use a Voronoi diagram (Fortune's algorithm) for the partitioning. Can I then change the lines of intersection between pieces to give them some curvature? Can each resulting shape be obtained as a path? To create the game I'll be using cocos2d-x. |
39 | How can you tell if you are a large player on a large map or a small player on a small map? If everything is scaled by a constant factor, can you tell that the world is smaller or larger? I think you could look down at the ground and see that it's "closer". But how do you know what should be the correct distance? What visual clues give it away? Edit The player controller has an FPS camera! |
39 | Find projecting triangle for UV mapping in RuneScape model format I am using an old Runescape model format, also used by Thief and Quake. In this format, instead of specifying UV coordinates for each vertex ABC, we specify a second trio of vertices PMN. Those vertices are then used to project UV texture coordinates onto ABC. Some previous Q amp A explains this projection algorithm and how to reverse it. I have a mesh with UVs that I want to save in this format. To do that, I want to find a trio of vertices PMN for each triangle ABC that reproduce the correct UVs. These PMN vertices are chosen from the collection of vertices already in my mesh. I could search every possible ordered triangle in my mesh, but that scales as O(n 3) and would be impractical for meshes with high vertex counts. How can I more efficiently find a PMN triangle that produces my desired UV coordinates on each triangle ABC? |
39 | How can I find the reflection of a point in a perpendicular line? From the image below I know the positions A and B. How can I find positions C and D, and the reflection (Er) of an object E in the line CD? I saw the solution in "How can I reflect a point about a line in Unity?" but I don't understand it well enough to apply it in my case. Please note that I'm trying to achieve this in 2D. (See the sketch after the table.) |
39 | What shapes interlock solidly like cubes? What basic geometric shapes, besides cubes, interlock "Minecraft"-style? A small edit to make the question more suited to Game Dev: I am experimenting with game dynamics to create geometric solids that interlock seamlessly on their faces, much the way the cubes do in Minecraft. The objective is to create an interlocking system that is memory efficient and consistent enough to maintain the interlock over the course of many connections without deviating from the main grid. |
39 | Exact point on a rotating sphere I have a sphere that represents the Earth, textured with real pictures. It's rotating around the x axis, and when the user clicks down it has to show me the exact place he clicked on. For example, if he clicked on Singapore, the system should be able to: understand that the user clicked on the sphere (OK, I'll do it with unProject); understand where the user clicked on the sphere (ray-sphere collision?) while taking the rotation into account; transform the sphere coordinate into a coordinate system suitable for some web API service; and ask the API (OK, this is the simpler part for me). Any advice? |
39 | Calculating a specific coordinate along a path Let's say I have a path comprised of a sequence of points that are connected by lines and arcs. The entire path has some specific length; let's call that length 100. What would be the mechanism to determine the exact x,y coordinate for any given point on that path? Meaning... Path path new Path() fill in various lineTos and arcTos to the path here .... Point p getCoordsFromPath(path, 50.0f) get the coords at the half-way point on the path Point getCoordsFromPath(Path inP, float position) Point ret do some magic and get the x,y at the point on the path that is the given distance from the start return ret TIA. (See the sketch after the table.) |
39 | Which quarter of the square is the point in? I've got a world made out of squares. The squares are divided into four triangles like this. The corners have their heights stored in a 2D array, and the center height is the average of the corners. To calculate the player's Y coordinate I have to know which triangle he is standing on. How can you calculate which quarter the player is in from the X and Z coordinates? EDIT: my algorithm: bool h1 false, h2 false if (posZ lt posX posZ) something weird going on here h1 true if (posX lt posZ posX) and here h2 true if (h1 amp amp h2) triangle 0 if (h1 amp amp !h2) triangle 1 if (!h1 amp amp !h2) triangle 2 if (!h1 amp amp h2) triangle 3 This algorithm tries to find which triangle you are in. There is something wrong with that math. (See the sketch after the table.) |
39 | Intersection points of plane set forming convex hull Mostly looking for a nudge in the right direction here. Given a set of planes (defined as a normal and distance from origin) that form a convex hull, I would like to find the intersection points that form the corners of that hull. More directly, I'm looking for a way to generate a point cloud appropriate to provide to Bullet. Bonus points if someone knows of a way I could give bullet the plane list directly, since I somewhat suspect that's what it's building on the backend anyway. |
39 | Finding the position of a point inside a triangle, based on the position of a point in another triangle (barycentric coordinates in Godot) I'm trying to translate a UV coordinate into a world coordinate (this is not in a shader, it's in GDScript in Godot, to bake out a texture based on some raycasts). So I find out which triangle in the UV map the coordinate is in, and then I find the corresponding triangle in the mesh and get its world position, and that's all good. But then I can't figure out how to translate the point's relative position inside the UV triangle to the mesh triangle. I've been trying to use the distance from the point to each of the triangle's corners as weights for how much of each of the mesh corners I should use to find the point I'm looking for, but I'm having no luck. Any help would be much appreciated. Not sure how best to illustrate this, but imagine you had a triangle where you knew the corner points a, b and c, and a point p inside it. And you had another triangle where you knew the corner points A, B and C. How would you find the point in ABC that has the same relative position as p has in abc? EDIT I added an answer based on the comments from DMGregory
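For reference, a minimal sketch of the barycentric mapping in C-style code rather than GDScript (names are placeholders): compute the weights of p against a, b, c in UV space, then apply the same weights to the world-space corners.
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Barycentric weights of p with respect to triangle (a, b, c) in 2D (UV space).
void barycentric(Vec2 p, Vec2 a, Vec2 b, Vec2 c, float& u, float& v, float& w)
{
    float d = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    u = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / d;
    v = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / d;
    w = 1.0f - u - v;
}

// The same weights applied to the world-space triangle give the corresponding point.
Vec3 mapPoint(Vec2 p, Vec2 a, Vec2 b, Vec2 c, Vec3 A, Vec3 B, Vec3 C)
{
    float u, v, w;
    barycentric(p, a, b, c, u, v, w);
    return { u * A.x + v * B.x + w * C.x,
             u * A.y + v * B.y + w * C.y,
             u * A.z + v * B.z + w * C.z };
}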
39 | How to know if two surfaces are facing the same direction? In my code I create a mesh that is composed of multiple tiles. Those tiles can share edges, and I need to know whether the normals of two tiles that share an edge point in the same direction, because I calculate the normal of the edge to smooth the lighting. Previously I used Vector.Dot(), but sometimes I have two tiles that form a curve that makes it impossible to use this function, as you can see in the 4th image, and the only information that I have in the calculation is the two normals of the tiles. img 1 different direction img 2 different direction img 3 same direction img 4 same direction img 5 same direction
39 | Procedural Geometry Generation I have recently been looking into SceneKit for OS X and noticed that there are several factory methods to create geometric shapes such as Box, Capsule, Cone, Cylinder, Plane, Pyramid, Sphere, Torus and Tube. I am interested in adding such primitive shapes to my renderer but am struggling to find any reasonable source from which I can gather an understanding of procedural generation. There are several resources which detail the theory, but lack the appropriate source code to back it up. SceneKit provides factory methods which allow for dynamically setting the attributes of such shapes. In the case of the Box, you can provide integer values for the number of width, height and depth segments which each face should be divided into. I understand the theory but lack the knowledge to begin subdividing geometry faces to achieve the desired effect. The vertices for each shape are most likely quite easy to generate in simple loops. What stumps me is knowing how to create the faces, or rather the appropriate texture coordinates for each face. Normals can be calculated per face so I'm fairly confident I could achieve what I want, it's just knowing where to start. Can anyone provide any details on procedural geometry? What I really need is some source code to glean some information from. I have searched high and low for tutorials but have so far come up with only a few reasonable sites or blogs. Any good books, tutorials, blogs or research papers would be appreciated. Edit based on comments I should have clarified that I know how to create vertices for basic shapes, most of these can probably be achieved by simple loops. What I don't comprehend is how to create faces from the generated array of vertices. How do I create a triangle strip, or triangles, from a seemingly unordered array of vertices? I assume that once I get past this point, I can create the normals from each face. Whilst I haven't delved into this yet, I have seen a lot of references to this and am sure it will be easy enough to implement. Ideally, i'd like to be able to generate geometry from a given set of properties such as the way that SceneKit provides. Given SceneKit has done it, and you can do similar things in Blender and Maya etc, I assume i'm not trying to implement the impossible. The final aspect would be applying textures. Again, this isn't something I have implemented but have read up on and am aware of the requirements. The main problem here is that I know what I want to achieve but am struggling to decipher how to implement for the aforementioned primitives. I was hopeful that I would be able to find some semblance of knowledge by way of source code but I really haven't come across anything suitable so far. |
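As an illustration of the index-generation part, here is a rough, untested sketch of a subdivided plane in C++ (all names are placeholders); a box is essentially six such planes, and spheres, cylinders and the other primitives follow the same pattern of generating a regular grid of vertices and then emitting two triangles per grid cell, with the UVs falling out of the same loop counters:
#include <cstdint>
#include <vector>

struct Vertex { float px, py, pz; float u, v; };

// Build a plane of (widthSegs x heightSegs) cells lying in the XZ plane.
// Vertices are laid out row by row, so the vertex at (ix, iz) has index iz * (widthSegs + 1) + ix.
void buildPlane(int widthSegs, int heightSegs, float width, float height,
                std::vector<Vertex>& vertices, std::vector<uint32_t>& indices)
{
    for (int iz = 0; iz <= heightSegs; ++iz)
    {
        for (int ix = 0; ix <= widthSegs; ++ix)
        {
            float u = (float)ix / widthSegs;
            float v = (float)iz / heightSegs;
            vertices.push_back({ (u - 0.5f) * width, 0.0f, (v - 0.5f) * height, u, v });
        }
    }

    int stride = widthSegs + 1;
    for (int iz = 0; iz < heightSegs; ++iz)
    {
        for (int ix = 0; ix < widthSegs; ++ix)
        {
            uint32_t i0 = iz * stride + ix;        // corners of the current cell
            uint32_t i1 = i0 + 1;
            uint32_t i2 = i0 + stride;
            uint32_t i3 = i2 + 1;
            indices.insert(indices.end(), { i0, i2, i1,  i1, i2, i3 });   // two triangles per cell
        }
    }
}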
39 | How can I tell whether an object is moving CW or CCW around a connected path? Let's say we have a jagged shape, and two creatures moving along its outline. Then we smooth the shape completely by pulling the corners out. We get this: It is easy to see now that orange is moving CW and green CCW. How can I tell in which direction they are moving without smoothing out the shape? New image
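For reference, a sketch of the signed-area (shoelace) test, assuming the outline points are listed in the order the creature visits them; the sign convention flips if the y axis points down, as in screen coordinates:
#include <vector>

struct Point { float x, y; };

// Twice the signed area of the closed outline, taken in traversal order.
// > 0 : counter-clockwise (with +y up), < 0 : clockwise.
float signedAreaTwice(const std::vector<Point>& outline)
{
    float sum = 0.0f;
    for (size_t i = 0; i < outline.size(); ++i)
    {
        const Point& a = outline[i];
        const Point& b = outline[(i + 1) % outline.size()];
        sum += a.x * b.y - b.x * a.y;
    }
    return sum;
}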
39 | Intersect of vector and triangle side I'm trying to create a triangular touch surface for iOS where the user can drag around a point inside this triangle. Using information from this page, it is easy to figure out if the dragged point is inside or outside the triangle. However, I want to clip the point to the triangle edges if the user drags outside the triangle. This is easy for side AB and side AC, because I just have to set vectors u or v to zero respectively if the user's finger drags outside of these edges. However, I'm not sure how to find point p, on side BC. I need to find this point of intersection if the user drags their finger outside of edge BC. |
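One simple option, sketched below (untested, placeholder names), is to clamp the dragged point to the closest point on segment BC rather than intersecting the drag direction with BC; it still yields a usable p on side BC whenever the finger is outside that edge:
struct Vec2 { float x, y; };

// Closest point to p on the segment from B to C: project onto the segment,
// then clamp the parameter to [0, 1]. Assumes B and C are distinct.
Vec2 clampToSegment(Vec2 p, Vec2 B, Vec2 C)
{
    Vec2 bc = { C.x - B.x, C.y - B.y };
    Vec2 bp = { p.x - B.x, p.y - B.y };
    float t = (bp.x * bc.x + bp.y * bc.y) / (bc.x * bc.x + bc.y * bc.y);
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return { B.x + t * bc.x, B.y + t * bc.y };
}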
39 | Segment Cylinder intersection What is the complete code (C++, pseudocode, it does not matter) for calculating the resulting segment (or its absence) from the intersection of a segment and a cylinder? The segment is defined by Vector3(x,y,z) as Start and Vector3(x,y,z) as End. The cylinder is defined by the same parameters plus a value for Radius.
39 | Splitting Graph into distinct polygons in O(E) complexity You may have seen my last question, "Trapped inside a graph: find paths along edges that do not cross any edges". How do you split an entire graph into the distinct shapes 'trapped' inside the graph (like the ones described in my last question) with good complexity? Please note each vertex has a fixed x,y position. What I am doing now is iterating over all edges and, from each one, traversing while always taking the rightmost turn. This does split the graph into distinct shapes. Then I eliminate all the excess shapes (those that are repeats of previous shapes) and return the result. The complexity of this algorithm is O(E²). I am wondering if I could do it in O(E) by removing edges I have already traversed. My current implementation of that returns unexpected results.
39 | 2D isometric screen to tile coordinates I'm writing an isometric 2D game and I'm having difficulty figuring precisely on which tile the cursor is. Here's a drawing where xs and ys are screen coordinates (pixels), xt and yt are tile coordinates, W and H are tile width and tile height in pixels, respectively. My notation for coordinates is (y, x) which may be confusing, sorry about that. The best I could figure out so far is this int xtemp xs (W 2) int ytemp ys (H 2) int xt (xs ys) 2 int yt ytemp xt This seems almost correct but is giving me a very imprecise result, making it hard to select certain tiles, or sometimes it selects a tile next to the one I'm trying to click on. I don't understand why and I'd like if someone could help me understand the logic behind this. Thanks! |
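For comparison, one common formulation, assuming the forward mapping is xs = (xt - yt) * W/2 and ys = (xt + yt) * H/2 with the screen origin at the top corner of tile (0,0); if your layout uses different offsets or axis directions the terms need adjusting accordingly:
#include <cmath>

// One possible screen -> tile mapping for a diamond-shaped isometric grid.
void screenToTile(float xs, float ys, float W, float H, int& xt, int& yt)
{
    float fx = (xs / (W * 0.5f) + ys / (H * 0.5f)) * 0.5f;
    float fy = (ys / (H * 0.5f) - xs / (W * 0.5f)) * 0.5f;
    xt = (int)std::floor(fx);   // floor rather than truncation so negatives round consistently
    yt = (int)std::floor(fy);
}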
39 | Getting the bounding box of a sphere I have a sphere with values center,radius and I need to convert the sphere to a bounding box with values min,max. How do I convert a sphere into a bounding box? |
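For an axis-aligned box this reduces to extending one radius from the center along each axis; a tiny untested sketch with placeholder types:
struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };

// The axis-aligned box that exactly contains the sphere.
AABB boundsOfSphere(Vec3 c, float r)
{
    return { { c.x - r, c.y - r, c.z - r },
             { c.x + r, c.y + r, c.z + r } };
}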
39 | Find extreme points of a rotated ellipse function on a given axis How to find the points where it is most extreme on the X and Y axis? For example lets say I have an equation that describes an ellipse that is rotated (x RadiusX Rx y RadiusX Ux) 2 (x RadiusY Ry y RadiusY Uy) 2 RadiusY 2 How can I find the points where it will be most extreme on each axis Please keep in mind the values for variables RadiusX, RadiusY, Rx, Ry, Ux, Uy are known. An example with values ((x 1 0.70711) (y 1 0.70711)) 2 ((x 1.414213 0.70711) (y 1.414213 0.70711)) 2 1.414213 1.414213 |
39 | Why is the depth test not done on geometry before rasterization? It seems the only time depth is used to discard data is during rasterization, i.e. at the fragment level. In the geometry stage, I've only seen culling and clipping of vertices. Is it not possible to determine whether a triangle is behind another triangle using just vertex data (depth?) and discard it if so? Wouldn't this save all the work later in scan conversion?
39 | Get triangles after subdivision I was able to do subdivision on a triangle mesh with midpoints but then I get polygons with too many faces. How do I convert to triangles after? Similar to this picture |
39 | Tilemap Collision Generation Tracing Polygons with Slopes I've been using Godot for my game, but it has a unfortunate quirks with tilemap collision where physics objects can bounce weirdly near tile seams and kinematic bodies can often get stuck in seams as well. For early alpha work, I've been able to get around this by not using tilemap collision and then tracing the contours by hand... Well this isn't going to scale well to 200 rooms (or whatever I end up having) and could easily lead to mismatched collision data, so I've decided that this process needs to be automated. Tilemaps have a few different tile types fully solid, diagonal slopes, and 2 1 slopes. My first attempt matched each of these types to a half tile size, marking each vertex as solid or not solid, and then combining all the vertices (with their overlap) into a single grid of vertices. I could then use all of these together by marching along the perimeter and probing ahead to detect corners. Turns out this works well for convex corners but has serious issues in concave corners, plus there isn't enough data to detect slopes properly, so it often resulted in extra slopey parts or cutting into slopes just slightly. From there I decided to add some extra information on the vertex grid, such as whether a vertex is a slope edge and doing clever things surrounding the properties of how it overlaps, but it's quickly blowing up into a lot of specific cases. I spent much of today thinking about adding vertex normals to each tile definition, but I realized it wasn't going to work without adding a bunch of different tile solidity types. Are there better approaches than what I'm thinking that don't involve brittle lossy intermediate representations or ridiculous amounts of special cases? |
39 | ray polygon intersection I am looking for an elegant way to do ray and polygon intersections in 2D. I don't care about the language. What I'm doing now is taking a line that lies on the ray (with screen length) and testing line-to-line intersections between this derived segment and all the lines formed by consecutive vertices of the polygon. For now this is good, even if I don't know which parts of the ray are inside or outside the polygon (in case it is a concave one). But for now I don't need this information, even if a general approach could maybe give it. I don't care about self-intersecting polygons. Any advice? Maybe triangulating the polygons could be an idea?
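For reference, a sketch of a single ray-versus-edge test using the parametric cross-product form (untested, names are placeholders); collecting every hit and sorting by t would also give the inside/outside intervals of the ray for a simple concave polygon via the even-odd rule:
#include <cmath>

struct Vec2 { float x, y; };

static float cross2(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

// Intersect the ray O + t*D (t >= 0) with the segment A-B.
// Returns true and fills 'hit' when they meet; parallel or degenerate cases return false.
bool rayVsSegment(Vec2 O, Vec2 D, Vec2 A, Vec2 B, Vec2& hit)
{
    Vec2 E  = { B.x - A.x, B.y - A.y };
    Vec2 AO = { A.x - O.x, A.y - O.y };
    float denom = cross2(D, E);
    if (std::fabs(denom) < 1e-9f)
        return false;                       // parallel (or zero-length segment)
    float t = cross2(AO, E) / denom;        // distance along the ray
    float s = cross2(AO, D) / denom;        // position along the segment, 0..1
    if (t < 0.0f || s < 0.0f || s > 1.0f)
        return false;
    hit = { O.x + t * D.x, O.y + t * D.y };
    return true;
}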
40 | Can't get Direct3D11 depth buffer to work I can't get the depth buffer to work correctly. I am rendering 2 cubes in a single Draw function, and from one angle it looks great. But swing the camera around to view the opposite sides, and I discover it's just Painter's Algorithm. This is my code to set up the depth stencil buffer: void Graphics::CreateDepthStencilBuffer(ID3D11Texture2D* backBuffer) { D3D11_TEXTURE2D_DESC dsTextureDesc; backBuffer->GetDesc(&dsTextureDesc); dsTextureDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT; dsTextureDesc.Usage = D3D11_USAGE_DEFAULT; dsTextureDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL; dsTextureDesc.MipLevels = 1; dsTextureDesc.ArraySize = 1; dsTextureDesc.CPUAccessFlags = 0; dsTextureDesc.MiscFlags = 0; Microsoft::WRL::ComPtr<ID3D11Texture2D> dsBuffer; ASSERT_SUCCEEDED(g_Device->CreateTexture2D(&dsTextureDesc, NULL, dsBuffer.ReleaseAndGetAddressOf())); D3D11_DEPTH_STENCIL_DESC dsDesc; dsDesc.DepthEnable = true; dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL; dsDesc.DepthFunc = D3D11_COMPARISON_LESS; ... snipped stencil properties ... ASSERT_SUCCEEDED(g_Device->CreateDepthStencilState(&dsDesc, g_DepthStencilState.GetAddressOf())); g_pDevCon->OMSetDepthStencilState(g_DepthStencilState.Get(), 1); D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc; depthStencilViewDesc.Format = dsTextureDesc.Format; depthStencilViewDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D; depthStencilViewDesc.Texture2D.MipSlice = 0; ASSERT_SUCCEEDED(g_Device->CreateDepthStencilView(dsBuffer.Get(), &depthStencilViewDesc, g_depthStencilView.GetAddressOf())); } And I'm calling g_pDevCon->OMSetRenderTargets(1, g_renderTargetView.GetAddressOf(), g_depthStencilView.Get()) after every Present call. I've been scratching my head for ages wondering what the problem is. Any clues will be much appreciated! Edit: I used the graphics debugger, and apparently the output merger is doing its job, as seen in the screenshot below, but that isn't what I am seeing on screen. At the top of the pic is the state of the depth buffer, but I can't make sense of it to determine if it's correct or not.
40 | ID3D11DeviceContext Map/Unmap bottleneck I maintain a small rendering engine that displays models in Direct3D 9. I'm currently migrating this to Direct3D 11, and I've hit a snag in how I display points on the model. Using a sphere mesh, I translate to each point in the local coordinates and draw the mesh. In Direct3D 9, I was using the fixed function pipeline for the drawing of the mesh, so I would just change the world matrix for each point and draw the mesh. Obviously in Direct3D 11, I need to use shaders and update constant buffers. The problem I've run into is the high volume of Map/Unmap calls I need to make so that I can set the constant buffers with the updated world matrix. This is creating a bottleneck and the framerate is slower than the Direct3D 9 implementation. If I remove the Map/Unmap calls, performance is much better, but obviously the meshes are in the wrong spot when I do this. Am I missing something? Is there a better way to be doing this? It seems strange to me that I can't mirror the same performance I got from Direct3D 9. One thing I should note is that this is done in C++/CLI, so there is some marshaling going on, but from what I can tell via benchmarking, that doesn't have anything to do with the slowdown. Here is my code: D3D11_MAPPED_SUBRESOURCE mappedResource; HRESULT result; result = device->UnmanagedContextPointer->Map(d3dBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource); Marshal::StructureToPtr(data, IntPtr(mappedResource.pData), false); device->UnmanagedContextPointer->Unmap(d3dBuffer, 0); Any insight would be greatly appreciated, thanks!
40 | When would a GPU need to write data to a vertex (or other) buffer? I'm trying to understand why, when and how a GPU would need to write data to some buffer inside its own video RAM. In Direct3D 11, there are two flags that concern this: D3D11_USAGE_DEFAULT and D3D11_USAGE_IMMUTABLE. The former gives the ability to read and write, the latter only the ability to read. So, apart from the resources that are actually rendered to, like back buffers or textures used in a render-to-texture scenario, is there any other case where one would prefer DEFAULT over IMMUTABLE? How would you get the GPU to change this data? (I guess from a shader?) And why isn't the one that gives you read-only access the default?
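As a rough illustration of the difference (untested sketch; the device, context and vertex data are placeholders): an IMMUTABLE buffer must be filled at creation time and never changes, while a DEFAULT buffer can still be refreshed from the CPU with UpdateSubresource, or written by the GPU itself (stream output, a compute shader through a UAV, CopyResource, render-to-texture and so on).
#include <d3d11.h>

// 'vertices' / 'newVertices' stand in for whatever vertex data the application owns.
void CreateExampleBuffers(ID3D11Device* device, ID3D11DeviceContext* context,
                          const void* vertices, const void* newVertices, UINT byteSize,
                          ID3D11Buffer** immutableVB, ID3D11Buffer** defaultVB)
{
    D3D11_BUFFER_DESC bd = {};
    bd.ByteWidth = byteSize;
    bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
    D3D11_SUBRESOURCE_DATA init = { vertices, 0, 0 };

    bd.Usage = D3D11_USAGE_IMMUTABLE;                 // contents fixed at creation time
    device->CreateBuffer(&bd, &init, immutableVB);

    bd.Usage = D3D11_USAGE_DEFAULT;                   // GPU read/write
    device->CreateBuffer(&bd, &init, defaultVB);

    // A DEFAULT resource can still be refreshed from the CPU without Map:
    context->UpdateSubresource(*defaultVB, 0, nullptr, newVertices, 0, 0);
}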
40 | Downscaling texture via mipmap Copied from Computer Graphics SE. I am implementing a post processing effect in my DirectX 11 pet renderer. The post processing pass is implemented by rendering a full screen quad covered with texture containing original rendered image, which works as it should, but I have problems with downscaling the texture. The non processed testing scene looks like this (three very bright emmissive spheres) I see no problem at this stage, but when I run the first post processing pass, which just down scales the image by the factor of 8 using the texture sampler, the result is very flickery (up scaled for clarity) I expected a mipmap would solve or at least reduce the flickering, but it didn't change a thing. What am I doing wrong? RenderDoc Update After investigating the issue using RenderDoc I found that the mipmap is being generated successfully and it's third level looks like this However, the output of the down scaling pass looks like this As if the sampler didn't use the mipmap at all. Don't get distracted by coloured object instead almost white ones. I lowered the sphere brightness a bit while investigating the bug. Even if I choose the mipmap level explicitly float4 vColor s0.SampleLevel(LinearSampler, Input.Tex, 3) it changes nothing RenderDoc also says "LOD Clamp 0 0" for the used sampler. What is it? Couldn't this be the problem? DirectX details Samplers D3D11 SAMPLER DESC descSampler ZeroMemory( amp descSampler, sizeof(descSampler)) descSampler.AddressU D3D11 TEXTURE ADDRESS CLAMP descSampler.AddressV D3D11 TEXTURE ADDRESS CLAMP descSampler.AddressW D3D11 TEXTURE ADDRESS CLAMP descSampler.Filter D3D11 FILTER MIN MAG MIP LINEAR mDevice gt CreateSamplerState( amp descSampler, amp mSamplerStateLinear) descSampler.Filter D3D11 FILTER MIN MAG MIP POINT hr mDevice gt CreateSamplerState( amp descSampler, amp mSamplerStatePoint) ...are set right before rendering the screen quad ID3D11SamplerState aSamplers mSamplerStatePoint, mSamplerStateLinear mImmediateContext gt PSSetSamplers(0, 2, aSamplers) ...and used within the down scaling PS shader SamplerState PointSampler register (s0) SamplerState LinearSampler register (s1) Texture2D s0 register(t0) float4 Pass1PS(QUAD VS OUTPUT Input) SV TARGET return s0.Sample(LinearSampler, Input.Tex) Texture D3D11 TEXTURE2D DESC descTex ZeroMemory( amp descTex, sizeof(D3D11 TEXTURE2D DESC)) descTex.ArraySize 1 descTex.BindFlags D3D11 BIND RENDER TARGET D3D11 BIND SHADER RESOURCE descTex.MiscFlags D3D11 RESOURCE MISC GENERATE MIPS descTex.Usage D3D11 USAGE DEFAULT descTex.Format DXGI FORMAT R32G32B32A32 FLOAT descTex.Width width descTex.Height height descTex.MipLevels 0 descTex.SampleDesc.Count 1 device gt CreateTexture2D( amp descTex, nullptr, amp tex) ...it's render target view D3D11 RENDER TARGET VIEW DESC descRTV descRTV.Format descTex.Format descRTV.ViewDimension D3D11 RTV DIMENSION TEXTURE2D descRTV.Texture2D.MipSlice 0 device gt CreateRenderTargetView(tex, amp descRTV, amp rtv) ...it's shader resource view D3D11 SHADER RESOURCE VIEW DESC descSRV ZeroMemory( amp descSRV, sizeof(D3D11 SHADER RESOURCE VIEW DESC)) descSRV.Format descTex.Format descSRV.ViewDimension D3D11 SRV DIMENSION TEXTURE2D descSRV.Texture2D.MipLevels (UINT) 1 descSRV.Texture2D.MostDetailedMip 0 device gt CreateShaderResourceView(tex, amp descSRV, amp srv) Explicit generation of mipmap is called after the scene was rendered into the texture and another texture was set as a render target. 
ID3D11RenderTargetView* aRTViews[1] = { mPass1Buff.GetRTV() }; mImmediateContext->OMSetRenderTargets(1, aRTViews, nullptr); mImmediateContext->GenerateMips(mPass0Buff.GetSRV()); ID3D11ShaderResourceView* aSRViews[1] = { mPass0Buff.GetSRV() }; mImmediateContext->PSSetShaderResources(0, 1, aSRViews); The code is compiled in debug and the D3D device was created with the D3D11_CREATE_DEVICE_DEBUG flag and I get no runtime errors on the console.
40 | How can I read texel data on a separate thread in D3D11? In D3D10, I load a staging texture onto my GPU memory, then map it in order to access its texel data on the CPU. This is done on a separate thread, not the thread I render with. I just call the device methods, and it works. In D3D11 I load the staging texture onto my GPU, but to access it (i.e. Map it) I need to use the Context, not the device. Can't use the immediate context, since the immediate context can only be used by a single thread at a time. But I also can't use a deferred context to Read from the texture to the CPU "If you call Map on a deferred context, you can only pass D3D11 MAP WRITE DISCARD, D3D11 MAP WRITE NO OVERWRITE, or both to the MapType parameter. Other D3D11 MAP typed values are not supported for a deferred context." http msdn.microsoft.com en us library ff476457.aspx Ok, so what am I supposed to do now? It is common to use textures to store certain data (heightmaps for instance) and you obviously have to be able to access that data for it to be useful. Is there no way for me to do this in a separate thread with D3D11? |
40 | Usage of render states in Direct3D 11.x? I know there are four different render states. They are Blend State Depth Stencil State Rasterizer State Sampler State One of my Direct3D reference books say that Direct3D is a state machine. But how, where do we use these in game development? |
40 | Tessellation Texture Coordinates Firstly some info I'm using DirectX 11 , C and I'm a fairly good programmer but new to tessellation and not a master graphics programmer. I'm currently implementing a tessellation system for a terrain model, but i have reached a snag. My current system produces a terrain model from a height map complete with multiple texture coordinates, normals, binormals and tangents for rendering. Now when i was using a simple vertex and pixel shader combination everything worked perfectly but since moving to include a hull and domain shader I'm slightly confused and getting strange results. My terrain is a high detail model but the textured results are very large patches of solid colour. My current setup passes the model data into the vertex shader then through the hull into the domain and then finally into the pixel shader for use in rendering. My only thought is that in my hull shader i pass the information into the domain shader per patch and this is producing the large areas of solid colour because each patch has identical information. Lighting and normal data are also slightly off but not as visibly as texturing. Below is a copy of my hull shader that does not work correctly because i think the way that i am passing the data through is incorrect. If anyone can help me out but suggesting an alternative way to get the required data into the pixel shader? or by showing me the correct way to handle the data in the hull shader id be very thankful! cbuffer TessellationBuffer float tessellationAmount float3 padding struct HullInputType float3 position POSITION float2 tex TEXCOORD0 float3 normal NORMAL float3 tangent TANGENT float3 binormal BINORMAL float2 tex2 TEXCOORD1 struct ConstantOutputType float edges 3 SV TessFactor float inside SV InsideTessFactor struct HullOutputType float3 position POSITION float2 tex TEXCOORD0 float3 normal NORMAL float3 tangent TANGENT float3 binormal BINORMAL float2 tex2 TEXCOORD1 float4 depthPosition TEXCOORD2 ConstantOutputType ColorPatchConstantFunction(InputPatch lt HullInputType, 3 gt inputPatch, uint patchId SV PrimitiveID) ConstantOutputType output output.edges 0 tessellationAmount output.edges 1 tessellationAmount output.edges 2 tessellationAmount output.inside tessellationAmount return output domain("tri") partitioning("integer") outputtopology("triangle cw") outputcontrolpoints(3) patchconstantfunc("ColorPatchConstantFunction") HullOutputType ColorHullShader(InputPatch lt HullInputType, 3 gt patch, uint pointId SV OutputControlPointID, uint patchId SV PrimitiveID) HullOutputType output output.position patch pointId .position output.tex patch pointId .tex output.tex2 patch pointId .tex2 output.normal patch pointId .normal output.tangent patch pointId .tangent output.binormal patch pointId .binormal return output Edited to include the domain shader domain("tri") PixelInputType ColorDomainShader(ConstantOutputType input, float3 uvwCoord SV DomainLocation, const OutputPatch lt HullOutputType, 3 gt patch) float3 vertexPosition PixelInputType output Determine the position of the new vertex. 
vertexPosition uvwCoord.x patch 0 .position uvwCoord.y patch 1 .position uvwCoord.z patch 2 .position output.position mul(float4(vertexPosition, 1.0f), worldMatrix) output.position mul(output.position, viewMatrix) output.position mul(output.position, projectionMatrix) output.depthPosition output.position output.tex patch 0 .tex output.tex2 patch 0 .tex2 output.normal patch 0 .normal output.tangent patch 0 .tangent output.binormal patch 0 .binormal return output |
40 | Deferred contexts and inheriting state from the immediate context I took my first stab at using deferred contexts in DirectX 11 today. Basically, I created my deferred context using CreateDeferredContext() and then drew a simple triangle strip with it. Early on in my test application, I call OMSetRenderTargets() on the immediate context in order to render to the swap chain's back buffer. Now, after having read the documentation on MSDN about deferred contexts, I assumed that calling ExecuteCommandList() on the immediate context would execute all of the deferred commands as "an extension" to the commands that had already been executed on the immediate context, i.e. the triangle strip I rendered in the deferred context would be rendered to the swap chain's back buffer. That didn't seem to be the case, however. Instead, I had to manually pull out the immediate context's render target (using OMGetRenderTargets()) and then set it on the deferred context with OMSetRenderTargets(). Am I doing something wrong or is that the way deferred contexts work? |
40 | Peculiar problem rendering specific triangles I've encountered a strange problem with a peculiar not quite solution. This problem is that certain polygons aren't rendered unless I run 'fraps'. Needless to say, I would much rather my program to run without 3rd party programs. So I was wondering what part of my directx 11 code would be the cause of this. Thanks in advance, please ask for code fragments if they're needed! |
40 | Is non indexed, non instanced rendering useful anymore? I'm adding batched rendering to my game engine and I'm wondering Should I support non indexed, non instanced batches or just indexed and or instanced? It's my understanding that the concept of indexed rendering was invented after pure "vertex only" drawing. That said, is supporting vertex only rendering useful anymore? Is there a modern use case for it? |
40 | Why does specular reflection only work in the center of the virtual scene? How do I calculate this specular reflection? HLSL: void calculateSpecular( in float4 Normal, in float4 SunLightDir, inout float4 Specular ) { Specular = specularLevel * pow(saturate(dot(reflect(normalize(abs(eyePosition)), Normal), SunLightDir)), specularExponent); } In the pixel shader: float4 Specular = float4(0.f,0.f,0.f,1.f); calculateSpecular( input.normal, sunLightDir, Specular ); sunLightDir is just the camera position, vec3(0,10,0). In the vertex shader: output.normal = mul( float4( input.normal, 0.f ), World );
40 | Is object space the same as local space? I was working in DirectX 11 and was wondering: is local space the same as object space, and if not, what is object space?
40 | Sharpdx DirectX11 MapSubresource is failing trying to map a staging texture I'm trying to render to a texture and then pull the image data out. I've created one texture as a render target and another as a staging texture. After rendering to the render target, I use CopyResource to copy from the render target texture to the staging texture. So far, so good. However, when I use DeviceContext.MapSubresource to get the data from the staging texture, I get E INVALIDARGS exception, and I can't figure out why. Here is how I create the staging texture textureDesc new Texture2DDescription() ArraySize 1, BindFlags BindFlags.None, CpuAccessFlags CpuAccessFlags.Read, Format Format.B8G8R8A8 UNorm, Height 256, MipLevels 1, SampleDescription new SampleDescription(1, 0), Usage ResourceUsage.Staging, Width 256 renderStaging new Texture2D( dev, textureDesc) Here is how I populate the staging texture and then try to map it DataStream stream con.CopyResource( renderTarget, renderStaging) The following is the line that fails box con.MapSubresource( renderStaging, 0, MapMode.Read, MapFlags.None, out stream) I have the same code working in C , so I know I have the general idea right. This is the working C code textureDesc.Width 256 textureDesc.Height 256 textureDesc.MipLevels 1 textureDesc.ArraySize 1 textureDesc.Format DXGI FORMAT B8G8R8A8 UNORM textureDesc.SampleDesc.Count 1 textureDesc.MiscFlags 0 textureDesc.Usage D3D11 USAGE STAGING textureDesc.BindFlags 0 textureDesc.CPUAccessFlags D3D10 CPU ACCESS READ hr dev gt CreateTexture2D( amp textureDesc, NULL, amp renderStagingTexture) devcon gt CopyResource(renderStagingTexture, renderTargetTextureMap) D3D11 MAPPED SUBRESOURCE mappedResource devcon gt Map(renderStagingTexture, 0, D3D11 MAP READ, 0, amp mappedResource) |
40 | Primitives LINESTRIP Closing to the first point? I'm doing an exercise from the Frank Luna book. It asks to draw a LineStrip that looks like the red line in the picture. I'm using md3dImmediateContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_LINESTRIP) I'm getting the line that is yellow, blue and black, and it is looping. How can I make it look like the red line? My vertex array that has the points: Vertex vertices[] = { XMFLOAT3(-5.0f, 2.0f, 0.0f), XMFLOAT3(-4.0f, 2.0f, 0.0f), XMFLOAT3(-3.0f, 2.0f, 0.0f), XMFLOAT3(-2.0f, 2.0f, 0.0f), XMFLOAT3(-1.0f, 2.0f, 0.0f), XMFLOAT3(0.0f, 2.0f, 0.0f) }; My indices are: UINT indices[] = { 0,1,2,3,4,5 };
40 | How do I draw a full screen quad in DirectX 11? How do I draw a full screen quad that shows red on the screen? |
40 | How to blend multiple normal maps? I want to achieve a distortion effect which distorts the full screen. For that I spawn a couple of images with normal maps. I render their normal map part on some camera-facing quads onto a rendertarget which is cleared with the color (127,127,255,255). This color means that there is no distortion whatsoever. Then I want to render some images like this one onto it. If I draw one somewhere on the screen, then it looks correct because it blends in seamlessly with the background (which is the same color that appears on the edges of this image). If I draw another one on top of it then it will no longer be a seamless transition. For this I created a blend state in DirectX 11 that keeps the maximum of the two colors, so it is now a seamless transition, but this way the colors lower than 127 (0.5f normalized) will not contribute. I am not making a simulation and the effect looks quite convincing and nice for a game, but in my spare time I am thinking about how I could achieve a nicer or more correct effect with a blend state, maybe averaging the colors somehow? If I did it with a shader, I would add the colors and then normalize them, but I need to combine an arbitrary number of images onto a rendertarget. This is my blend state now, which blends them seamlessly but not correctly: D3D11_BLEND_DESC bd; bd.RenderTarget[0].BlendEnable = true; bd.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA; bd.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA; bd.RenderTarget[0].BlendOp = D3D11_BLEND_OP_MAX; bd.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE; bd.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO; bd.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_MAX; bd.RenderTarget[0].RenderTargetWriteMask = 0x0f; Is there any way of improving upon this? (PS. I considered rendering each one with a separate shader incrementally on top of each other but that would consume a lot of render targets, which is unacceptable)