
SomaZ

Everything posted by SomaZ

  1. No, you can just combine them or take just one. The people at Frostbite, for example, leave the AO completely aside and only use the cavity occlusion. (That's how I read their "workshop" on getting PBR working.)
  2. No, you don't need to add or change anything in the shader, that's how I understand it. Strange colors should be normal; they aren't for showing. Adding them in the shader is just for debugging purposes.
  3. Added mip-map convolution with GGX distribution using importance sampling, replacing the old standard mip mapping code (simply downscaling the image) with pre-filtering (1024 samples). It takes some time to see the difference for someone who doesn't know what it's supposed to do, but I'm positive this was the biggest part of getting IBL working in the engine.
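The core of that pre-filtering step is drawing half-vectors from the GGX distribution with a low-discrepancy sequence. A minimal sketch of the sampling math (standalone CPU code, not the actual rend2 shader; in the real pass each of the 1024 samples looks up the environment cubemap and the results are averaged per output-mip texel):

```cpp
#include <cmath>
#include <cstdint>

struct Vec3 { float x, y, z; };

// Van der Corput radical inverse, base 2 — together with i/N this forms
// the Hammersley point set used to place the importance samples.
static float radicalInverse(uint32_t bits) {
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return float(bits) * 2.3283064365386963e-10f; // divide by 2^32
}

// Importance-sample a half-vector from the GGX normal distribution,
// in tangent space around +Z. u1, u2 are the Hammersley pair.
static Vec3 importanceSampleGGX(float u1, float u2, float roughness) {
    const float a = roughness * roughness;
    const float pi = 3.14159265358979f;
    float phi = 2.0f * pi * u1;
    float cosTheta = std::sqrt((1.0f - u2) / (1.0f + (a * a - 1.0f) * u2));
    float sinTheta = std::sqrt(1.0f - cosTheta * cosTheta);
    return { sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta };
}
```

Each cubemap mip is pre-filtered with an increasing roughness, so at runtime the shader can pick the mip that matches the material's roughness.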
  4. Hey, no problem. You're doing fine. Seriously. When I see questions regarding the renderer, I'm here to answer them. It takes some time to fully understand how everything works together without getting confused, especially when you've worked with another system for some 10 years. (How long have you been doing 3d stuff DT85?)
  5. Yea, I'm aware of all of this. I just didn't want to confuse people right now, since most of the people in this forum are not familiar with the PBR workflow and need to learn it from the beginning. I will rename it at some point, but for now I'll stick with it.
  6. AO shall go into the specular map's blue channel. Don't apply it to the albedo; we would lose the correct specular color. So the maps we need for PBR:
     Albedo RGBA: RGB albedo color, A transparency (optional)
     Specular RGB: R roughness (grayscale), G metal mask, B occlusion (cavity occlusion, ambient occlusion, or a combined one; it doesn't matter, it's the artist's choice)
     Normal RGBA: RGB normal map, A height map (optional)
     That's the material layout right now. Occlusion doesn't get handled yet, though. Will do this when the IBL stuff is ready.
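Packing the three grayscale masks into the specular map's channels is a simple byte-packing step. A minimal sketch of what an export script or tool would do (names are illustrative, not actual engine code):

```cpp
#include <algorithm>
#include <cstdint>

// One texel of the specular map described above:
// R = roughness, G = metal mask, B = occlusion. Inputs in [0, 1].
struct SpecularTexel { uint8_t r, g, b; };

static uint8_t toByte(float v) {
    v = std::min(std::max(v, 0.0f), 1.0f);        // clamp to [0, 1]
    return static_cast<uint8_t>(v * 255.0f + 0.5f); // round to nearest byte
}

static SpecularTexel packSpecular(float roughness, float metal, float occlusion) {
    return { toByte(roughness), toByte(metal), toByte(occlusion) };
}
```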
  7. Right now, it's not possible to use IES data for dynamic lighting, though I thought of implementing it along with the other types of dynamic lights. Well, let's see if I get to that point in the near future. @@DT85 Nice shots. The difference deluxe mapping makes gets more obvious when you disable cubemapping. @@Almightygir Deluxe mapping is part of the map compiler. It adds a texture to all the compiled brush and model work, just like the lightmap, but instead it stores a vector that points to the original light positions (like some kind of normal map). If more than one light affects the surface, the light positions get averaged. This can look really bad if lights from very different angles affect a surface. Pro: no need for dynamic lights, and support for older maps (as long as they were compiled with deluxe mapping), plus precompiled lightmaps. Con: can look bad in some cases.
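The averaging step is where the bad cases come from: opposing light directions partially or fully cancel each other. A minimal sketch of the idea (illustrative names, not the actual q3map2 compiler code; weights would be each light's contribution to the surface):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Weighted average of the directions toward each light affecting a surface,
// as a deluxe-mapping compiler might store it per luxel.
static Vec3 averageLightDir(const Vec3* dirs, const float* weights, int count) {
    Vec3 sum = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < count; ++i) {
        sum.x += dirs[i].x * weights[i];
        sum.y += dirs[i].y * weights[i];
        sum.z += dirs[i].z * weights[i];
    }
    float len = std::sqrt(sum.x * sum.x + sum.y * sum.y + sum.z * sum.z);
    if (len < 1e-6f)
        return {0.0f, 0.0f, 1.0f}; // opposing lights cancelled out; fall back to the surface normal
    return { sum.x / len, sum.y / len, sum.z / len };
}
```

Two equally bright lights from opposite sides hit the fallback branch, which is exactly the "lights from very different angles" artifact described above.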
  8. I guess I will make some videos when I'm done with the IBL implementation. Maybe even share them on other sites to attract more people. I'm pretty sure there will be some interest, maybe even help with texturing and optimizing the maps for the new renderer.
  9. Right now I'm so wasted, but there will be some nice progress. Literally sitting in a club, getting some rest from the rest of the world. I'm very happy that there are some people trying to fix or improve parts of the renderer. A big thank you to all of you.
  10. Jep, proper support for IBL. What I did was manually set the mip maps for the cubemap. So the correct answer is: convolving cubemaps.
  11. Nonono. Last hint: it's part of the IBL evaluation.
  12. Next hint: when it does what it's supposed to do, it will be part of the PBR evaluation.
  13. Jep, on cubemaps. But they already worked before. There is something special about this picture. Little hint: I haven't changed the shaders.
  14. Let's play a little game. What am I working on?
  15. No, actually we need it for every single model the renderer loads, as far as I understood. But glm has the highest priority imo. Testing tangents with md3 is totally legit.
  16. @@Almightygir Can't we simply take the mikktspace sources you provided to calculate the tangents and bitangents in the renderer with it? The result should be the same, or am I mistaken?
  17. Shader is a mess right now. Actually I want to rewrite some parts of it, but first I want to add some other stuff to the renderer, else we do things twice or thrice. I'd love someone else to check the shader; let's say I will mention you when the renderer is, let's say, prepared. Right now it uses a wild mix of stuff, like a mixed specular BRDF from Renaldas' siggraph15 and Frostbite (super gross). @@DT85 Thank you for the kind words. Besides that, have you tried compiling your maps with deluxe mapping? This kind of works now. It saves light directions per surface. If more than one light affects a surface, the light direction gets averaged (this looks shitty, but maybe your maps can work like that).
  18. https://github.com/DT85/OpenDF2/commit/d2f53319d5f75d8e1b7ff1f95b0e5def7b7ff188#diff-ea1716bf94809f6c7a58650ab2f063f1L3511
  19. Yea, I know. I hoped nobody would notice, since I had already fixed it and didn't want to make new screenshots. I found a valid fix though: the tangent and bitangent buffers just need to be zeroed at initialization. That's it. Damn, this took a while...
  20. I kind of found a solution. It's ugly coding though. http://imgur.com/a/0UTbu Maybe someone can find a real coding solution for it. I added some vector zeroing in tr_ghoul2. I'm not sure why this is needed; the code doesn't look like it's necessary. Now the code doesn't average the tangents of a surface anymore, but takes the tangent from a single triangle that's part of the surface. Edit: @@AshuraDX Edit2: Code change: it averages the tangents again, but they still have to be cleared first.
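The zeroing matters because per-vertex tangents are accumulated across all triangles sharing a vertex, then normalized; if the accumulation buffer starts with stale data, that garbage leaks into every average. A minimal sketch of the pattern (illustrative names, not the actual tr_ghoul2 code):

```cpp
#include <cmath>
#include <cstring>

struct Vec3 { float x, y, z; };

// Average per-triangle tangents into per-vertex tangents.
// The memset is the fix: without it, whatever was in the buffer
// before gets summed into the result.
static void accumulateTangents(Vec3* tangents, int numVerts,
                               const int* triIndices, int numTris,
                               const Vec3* triTangent /* one per triangle */) {
    std::memset(tangents, 0, sizeof(Vec3) * numVerts); // zero before accumulating
    for (int t = 0; t < numTris; ++t) {
        for (int c = 0; c < 3; ++c) {
            Vec3& dst = tangents[triIndices[t * 3 + c]];
            dst.x += triTangent[t].x;
            dst.y += triTangent[t].y;
            dst.z += triTangent[t].z;
        }
    }
    for (int v = 0; v < numVerts; ++v) { // renormalize the averaged sums
        float len = std::sqrt(tangents[v].x * tangents[v].x +
                              tangents[v].y * tangents[v].y +
                              tangents[v].z * tangents[v].z);
        if (len > 1e-6f) {
            tangents[v].x /= len;
            tangents[v].y /= len;
            tangents[v].z /= len;
        }
    }
}
```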
  21. Pics for @Archangel35757: http://imgur.com/a/hVA4D Code for @@DT85:

     #if defined(USE_LIGHT) && !defined(USE_FAST_LIGHT)
         // possible things to show: var_Tangent.xyz, var_Bitangent.xyz, var_Normal.xyz, N
         out_Color.rgb = var_Tangent.xyz * 0.5 + vec3(0.5);
     #endif

     Just place it at the end of the main function of the fragment code. Soooo, that means the tangent code is completely off for glm models. Now I need to find out where the problem is ?_?
  22. Sure thing. I will make some pics while I try to find a solution for this mess.
  23. Yea, instead of fixing the problem, I kind of recreated it. Which is kind of a good clue...
  24. Yea, already did that two days ago. I also played around with the bitangent scaling. Or better, changed it a bit. The md3 normals looked the same after changing the bitangent code, so I think the problem lies there. I just had no time to investigate further. I have some time tomorrow, so maybe I'll figure out what's wrong with it.
  25. My guess is that the bitangent is scaled wrongly for the glm. As I already said, I will look at it when I have some time. Thank you all for tracking down the issue.