UINT in HLSL
category: general [glöplog]
Is there some sort of restriction for how you can use uints in HLSL? The MSDN page for HLSL scalar types doesn't seem to mention anything special regarding them. The problem is that if I declare a variable as uint in my HLSL pixel shader, the program crashes. Signed ints work (and unsigned int in GLSL works).
This is on DX10 / GF7700 btw, using the ps_3_0 profile.
are you sure unsigned int in GLSL works on a card that is less than shader model 4? i.e. does it work also on your setup?
Quote:
are you sure unsigned int in GLSL works on a card that is less than shader model 4? i.e. does it work also on your setup?
Yep, it works on this rig. It also works on a GF6200.
Quote:
DX10 / GF7700 btw, using the ps_3_0 profile
Does not compute.
It doesn't make sense. DX10 does not work on this card, and integer arithmetic is something new in Shader Model 4.0.
imbusy: i think if a card doesn't support DX10 features it doesn't matter; it still runs the application as long as you don't use any of those features (in other words, d3d10.dll is ABI compatible), as far as i know. but i'm really not the one to speak about DX10, smash, ryg or chaos might be more of a help.
mic_: on the other hand, for integer operations on the GPU you need shader model 4, and unfortunately for you, none of the cards you cited supports SM4.0.
uhm, but since they removed the caps-bits, how can one know if a program will run or not?
to be a dx10-compatible card it must support the whole dx10 specification. that's one of the reasons why microsoft removed the caps bits (to get less card-specific code).
Yeah, that's what I thought. So how come mic_ is able to run DX10 on gf6 and gf7?
he can't, period. he's using dx9, obviously :)
while trying to use DX10-features, obviously :)
while he should be coding GBA, obviously :)
@graga: GBA? You've had one too many Faxe to drink. I'm coding on Sega consoles these days. Sega does what Nintendon't, y'know? :P
@rest: What I'm trying to do is to divide a value by some other value, then take the lowest 8 bits of the result (to get a value in the range 0..255). Since I can't use bitwise operators, the way I did it in GLSL was like this:
Code:
uintVar = floor(floatVar);             // truncate to an integral value
uintVar /= someValue;                  // integer division
uintVar = mod(float(uintVar), 256.0);  // keep only the lowest 8 bits
floatVar = float(uintVar);
Works fine on the two cards I mentioned. Trying to do a similar operation in HLSL doesn't, because it crashes just from including the variable declaration.
mic_: my guess is that your shader fails to compile, and you end up trying to make COM calls through an uninitialized pointer. Which is a very bad idea. Make sure you check return codes, especially on object creation.
Well, yeah, it crashes when I change a single variable declaration from int to uint, so I guess it's pretty obvious that it fails to compile the shader. What I'd like to know is why it fails. Or I'll just have to see if I can use another workaround in HLSL...
no error message returned? :)
mic_: because you need shader model 4 and DX10 to use "uint" in shaders. It's not a supported datatype in shader model 3. This has been pointed out many times in this thread already.
That being said, uint works just fine on my gf8 in dx9. But I can't find it mentioned in the documentation for anything other than DX10. Anyway, I gave your code a go, and it didn't compile, due to the use of mod(). When I replaced that with "%", the code worked. But again, I don't think you can reliably use uint in your DX9 code - I suspect the support is only there in the compiler for DX10 purposes, and that it's a bug that it doesn't prevent you from using it. But I could be wrong, of course.
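For reference, the variant that compiled for me looked roughly like this (ps_3_0 target; a rough sketch of your snippet with my own placeholder names, so don't take it as gospel):
Code:
uint uintVar = (uint)floor(floatVar); // truncate to an integer
uintVar /= someValue;                 // integer division
uintVar = uintVar % 256;              // % instead of mod() - keeps the lowest 8 bits
floatVar = (float)uintVar;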
@moose: Decipher and imbusy mentioned "integer operations/arithmetic". They didn't make any distinction between signed and unsigned, and signed ints work fine in HLSL on my GF7.
@gargaj: None checked. I'm using Hitchhikr's DX framework to load the shader, and it's written to be as small as possible, not to check for errors. I guess I could make a debugging sandbox for HLSL shaders, but I was mostly just curious about why it would work in GLSL but not in HLSL and ass-u-med that someone here would've run into this before since there are people here who do a lot of shader coding.
I'll just use something else in HLSL, it's not a major issue.
@moose: The code I pasted was GLSL. I'm using fmod in the HLSL code.
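Something along these lines, that is (a float-only sketch of the same logic, not the exact shader; fmod behaves like % for non-negative values, so no integer types are needed):
Code:
float tmp = floor(floatVar);      // work entirely in floats
tmp = floor(tmp / someValue);     // emulate the integer division
tmp = fmod(tmp, 256.0);           // lowest 8 bits of the quotient
floatVar = tmp;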
whatever. just use GLSL, it's better. it's not me who said it, it's mic_! :)
the problem is that at least in dx9, uint doesn't appear to be a valid type (when i checked it in fxcomposer, anyway), even though it claims to be in the hlsl docs. so that's why your shader doesn't compile. compilers spit out warnings and errors for a reason, you know.. :) could you not just use int instead, though?
on ps3.0 profiles/dx9 pixel shaders it'll all be compiled using floats anyway, so hey. :)
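meaning you can grab your low 8 bits with plain float ops. an untested sketch (frac() here is just another way to spell the fmod trick):
Code:
// lowest 8 bits of a non-negative integral float, pure float math
// (same result as fmod(x, 256.0) for x >= 0, within float precision)
float low8 = frac(x / 256.0) * 256.0;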
mic_: What I tried to say was "Just because it runs, doesn't mean it's correct".
Smash: sure you didn't mean "could you not just use float instead, though?"
Yeah, I've fixed it already, using an extra if-clause.
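Something in this spirit, roughly (a sketch of the idea rather than the literal code; the if guards against fmod following the sign of the dividend when the input goes negative):
Code:
float tmp = fmod(floor(floatVar / someValue), 256.0);
if (tmp < 0.0)       // fmod keeps the dividend's sign in HLSL
    tmp += 256.0;    // fold negative results back into 0..255
floatVar = tmp;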