[unixODBC-dev] Re: Bug ? SQLGetDiagFieldW() assume buffer lengths in SQLWCHAR instead of bytes
Marc.Herbert at continuent.com
Mon Jan 16 09:51:00 GMT 2006
Nick Gorham <nick.gorham at easysoft.com> writes:
> I could be wrong, but the code you mentioned is in the ANSI entry
> point SQLGetDiagField, and it's used when the driver only has the
> Unicode entry point.
Exactly, that's why it's the trickiest case.
> So the driver is allocating a buffer that's big
> enough to handle the ANSI buffer in Unicode, so it then can convert it
> back to ANSI on return.
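For readers following along, that shim pattern looks roughly like this. This is a minimal sketch of the technique being described, not the actual unixODBC source; the type stand-ins and the helper name `alloc_wide_for_ansi` are illustrative:

```c
#include <stdlib.h>

/* Minimal stand-ins for the ODBC types; illustrative only. */
typedef unsigned short SQLWCHAR;
typedef short SQLSMALLINT;

/* The ANSI caller supplies a buffer of buffer_length bytes, but the
 * driver only exports the Unicode entry point.  So a temporary wide
 * buffer is allocated that can hold the same number of characters;
 * the W entry point fills it, and the result is converted back to
 * ANSI on return. */
static SQLWCHAR *alloc_wide_for_ansi(SQLSMALLINT buffer_length)
{
    /* One SQLWCHAR per ANSI byte requested, so the temporary buffer
     * is sizeof(SQLWCHAR) times larger in bytes. */
    return malloc((size_t)buffer_length * sizeof(SQLWCHAR));
}
```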
Right, and I actually did not suggest changing that. I just added a
variable for convenience (just like you did?).
> I agree with you however on the length that's passed into the driver
> being wrong,
Yes, that's what I was trying to point to: the size passed to the
driver (thus under-estimating the malloc'ed memory), and also the size
on the way back.
> But maybe this is correct:
> What do you think?
I think your patch is equivalent to mine concerning multiplication and
division of sizes by sizeof(SQLWCHAR); we seem to agree on this!
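To spell out the arithmetic we agree on: byte counts and SQLWCHAR counts differ by a factor of sizeof(SQLWCHAR), in both directions. A sketch with illustrative helper names, not actual unixODBC code:

```c
/* Stand-ins for the ODBC types; illustrative only. */
typedef unsigned short SQLWCHAR;
typedef short SQLSMALLINT;

/* Going into the driver: an ANSI length of N (narrow) characters
 * needs a wide buffer of N * sizeof(SQLWCHAR) bytes. */
static SQLSMALLINT ansi_len_to_wide_bytes(SQLSMALLINT ansi_len)
{
    return (SQLSMALLINT)(ansi_len * sizeof(SQLWCHAR));
}

/* Coming back: the byte count the driver reports for the wide string,
 * divided by sizeof(SQLWCHAR), gives the character count -- which is
 * also the byte count of the string once converted back to ANSI. */
static SQLSMALLINT wide_bytes_to_ansi_len(SQLSMALLINT wide_bytes)
{
    return (SQLSMALLINT)(wide_bytes / sizeof(SQLWCHAR));
}
```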
However, I still don't get why you malloc() a couple of extra bytes,
the size of one extra (wide) character. Just an extra precaution?
It can't harm much anyway...
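If it is indeed a precaution, my guess (an assumption on my part, not something stated in your mail) is that the extra character leaves headroom for a NUL terminator, i.e. something like:

```c
#include <stdlib.h>

/* Stand-ins for the ODBC types; illustrative only. */
typedef unsigned short SQLWCHAR;
typedef short SQLSMALLINT;

/* Hypothetical version of the allocation with one extra wide
 * character of headroom, e.g. so a terminating NUL always fits even
 * when the driver fills the buffer completely.  This is a guess at
 * the intent, not the actual unixODBC code. */
static SQLWCHAR *alloc_wide_with_headroom(SQLSMALLINT buffer_length)
{
    return malloc(((size_t)buffer_length + 1) * sizeof(SQLWCHAR));
}
```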