haru-s
Joined: 28 Mar 2009 Posts: 8 Location: Japan
Posted: Fri Oct 29, 2010 9:37 am Post subject: Some char-based types must be ubyte-based types |
Hi!
LPSTR is defined as "alias char* LPSTR".
But D's char is a UTF-8 code unit, so LPSTR should be defined as "alias ubyte* LPSTR".
This problem applies broadly to other parts of the bindings.
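For instance, filling a char array with arbitrary bytes and then validating it fails. A minimal sketch (my illustration, assuming D2 with Phobos; std.utf.validate is the validation routine used here):

import std.utf : validate;

void main()
{
    // arbitrary binary bytes, not text
    char[] data = [cast(char)0xFF, cast(char)0xFE];
    validate(data); // throws a UTFException: invalid UTF-8 sequence
}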
doob
Joined: 06 Jan 2007 Posts: 367
Posted: Sat Oct 30, 2010 2:22 am Post subject: |
That would cause a lot of casting between char* and ubyte*, which would be very annoying, and since all valid ASCII is also valid UTF-8, I don't think it's a problem. "The first 128 characters of the Unicode character set (which correspond directly to the ASCII) use a single octet with the same binary value as in ASCII." - http://en.wikipedia.org/wiki/UTF-8
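To make the casting concrete, a minimal sketch (the alias LPSTR_U and hypotheticalApi are my inventions for illustration, not part of the bindings):

alias ubyte* LPSTR_U; // what a ubyte-based LPSTR would look like

void hypotheticalApi(LPSTR_U s) {} // stands in for any Win32 call taking LPSTR

void main()
{
    char[] text = "hello".dup; // ASCII, hence also valid UTF-8
    hypotheticalApi(cast(LPSTR_U)text.ptr); // a cast at every call site
}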
haru-s
Joined: 28 Mar 2009 Posts: 8 Location: Japan
Posted: Mon Nov 01, 2010 8:53 am Post subject: |
You are right as far as binary semantics go.
But some LPSTR fields, such as WAVEHDR.lpData, don't have character semantics at all.
For example:

import win32.mmsystem;
char[] buffer = new char[BUFFER_LENGTH]; // new char[N] yields char[], so take .ptr below
WAVEHDR header;
header.lpData = buffer.ptr;

You will get an "invalid UTF-8 sequence" error, because char data is validated as UTF-8 and an audio buffer is arbitrary binary. Declaring the buffer as ubyte avoids the validation but forces a cast:

import win32.mmsystem;
ubyte[] buffer = new ubyte[BUFFER_LENGTH];
WAVEHDR header;
header.lpData = cast(LPSTR)buffer.ptr;

If the type of WAVEHDR.lpData were ubyte*, the cast(LPSTR) would not be needed.
However, the root cause is probably that the Win32 API itself uses char* for binary data...
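One possible middle ground (my sketch, not from the bindings; asLpData is a hypothetical helper) is to hide the cast behind a tiny function when filling lpData with binary audio data:

import win32.mmsystem; // assumes the win32 bindings; LPSTR and DWORD come from them

LPSTR asLpData(ubyte[] buf) { return cast(LPSTR)buf.ptr; }

void prepare(ref WAVEHDR header, ubyte[] buffer)
{
    header.lpData = asLpData(buffer); // no cast at the call site
    header.dwBufferLength = cast(DWORD)buffer.length;
}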