DirectX and Chip8

glVertex3f

Crap Cracker
I know a lot of you did your Chip8 emulators with DirectX. My question is: what is a good way to draw the "pixels" for a Chip8 emulator?

The way I tried was extremely slow, and I figure that for something this simple, I don't need to use the camera stuff.

Here's how I was doing it:
Code:
void C8video::RenderScreen ()
{

	gD3D->Begin ();
	gD3D->ClearColor ( 0xFF000000 );

	// One DrawSquare call per Chip8 pixel: white if set, black if clear
	for ( int y = 0; y < max_height; y++ ) {

		for ( int x = 0; x < max_width; x++ ) {

			if ( Display [ x + ( y * max_width ) ] != 0 )
				gD3D->DrawSquare ( static_cast <float> ( x ), static_cast <float> ( y ),
						   pixel_width, pixel_height, 0xFFFFFFFF );
			else
				gD3D->DrawSquare ( static_cast <float> ( x ), static_cast <float> ( y ),
						   pixel_width, pixel_height, 0xFF000000 );

		}

	}

	gD3D->End ();

}

Here is DrawSquare:
Code:
void D3Dclass::DrawSquare ( float posx, float posy, float width, float height, DWORD color )
{

	// posx/posy are grid coordinates; scaling by width/height maps them to screen pixels.
	// The four corners are ordered for a two-triangle strip.
	DxVertex temp_square [] = {

		{   posx * width          ,   posy * height           , 1.0f, 1.0f, color },
		{ ( posx * width ) + width,   posy * height           , 1.0f, 1.0f, color },
		{   posx * width          , ( posy * height ) + height, 1.0f, 1.0f, color },
		{ ( posx * width ) + width, ( posy * height ) + height, 1.0f, 1.0f, color },

	};

	Render ( temp_square, 4 );

}

where DxVertex is defined like so:
Code:
// Pre-transformed (XYZRHW-style) vertex: screen-space position plus diffuse color
struct DxVertex
{

	float x, y, z, w;

	DWORD color;

};

I am very new to DirectX and can't seem to find the right answers anywhere.
 

Doomulation

The methods you use are unknown to me, but that's maybe because of a different version.
In any case, I drew my pixels using two triangles. And I left my source on the forum too, so you may look it up in the source if you wish.
It also seemed like the vertex buffers slowed it down very drastically, so I basically rendered the data on-the-fly without any vertex buffers.
 
OP
glVertex3f

Crap Cracker
Oh yeah, it might have helped if I had posted the actual render code:

Code:
bool D3Dclass::Render ( DxVertex *vert_list, int num_verts )
{

	int vert_size = sizeof ( DxVertex ) * num_verts;

	// NOTE: a brand-new vertex buffer is created, filled, and released
	// for every single square that gets drawn
	IDirect3DVertexBuffer9 *VertexBuffer;

	// D3DLOCK_DISCARD below is only valid on dynamic buffers, so the
	// buffer must be created with D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY
	HRESULT result = D3DDevice->CreateVertexBuffer ( vert_size,
							 D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY,
							 DxVertexType, D3DPOOL_DEFAULT,
							 &VertexBuffer, NULL );

	if ( result != D3D_OK ) return false;

	void *verts = NULL;

	result = VertexBuffer->Lock ( 0, 0, &verts, D3DLOCK_DISCARD );

	if ( result != D3D_OK ) {

		VertexBuffer->Release ();
		return false;

	}

	memcpy ( verts, vert_list, vert_size );

	VertexBuffer->Unlock ();

	D3DDevice->SetStreamSource ( 0, VertexBuffer, 0, sizeof ( DxVertex ) );
	D3DDevice->SetFVF ( DxVertexType );

	// a strip of N vertices contains N - 2 triangles
	D3DDevice->DrawPrimitive ( D3DPT_TRIANGLESTRIP, 0, num_verts - 2 );

	VertexBuffer->Release ();

	return true;

}

And when I say slow, I mean CRAWLING. COMPLETELY unplayable.
So I know I'm doing something seriously wrong.

And I did look at your source, but I am so new to DirectX I can't really understand what you did. I will continue to look into it.
 

dwx

New member
You set up your whole vertex buffer for every Chip8 pixel you are drawing.
You could use DrawPrimitiveUP instead of DrawPrimitive ... I think it would be a lot faster if you want to access every single pixel.
It is NOT good to use DrawPrimitiveUP if you need to render static geometry or large amounts of vertices, but in your case: just use it and test if it's faster.
If you look into the SDK docs and understand what DrawPrimitiveUP does, you will know how to use it ;-)
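(To illustrate, here is a minimal sketch of what that could look like, reusing the D3Dclass names from the posts above; it is an assumption about the rest of that class, not working code from the thread:)
Code:
// Sketch: same Render interface, but no per-call vertex buffer.
// DrawPrimitiveUP copies the vertex data from the user pointer itself.
bool D3Dclass::Render ( DxVertex *vert_list, int num_verts )
{

	D3DDevice->SetFVF ( DxVertexType );

	// a strip of N vertices contains N - 2 triangles
	HRESULT result = D3DDevice->DrawPrimitiveUP ( D3DPT_TRIANGLESTRIP, num_verts - 2,
						      vert_list, sizeof ( DxVertex ) );

	return result == D3D_OK;

}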
 

Doomulation

Well mate, see, there are a lot of optimizations that can be done. DrawPrimitiveUP is probably the best. Lookie at my source and you'll see how I increased speed a little =)

EDIT:
Okay, first make sure you don't allocate new memory for each array, as allocating large heaps of memory over and over is time-consuming. A hard-learned lesson it was; allocate one worst-case array up front instead (see the sketch below).
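(For illustration, such a persistent array might be declared as follows; the name vScreen matches the render code below, but the exact declaration is my assumption:)
Code:
// Hypothetical one-time allocation: worst case is every pixel lit in
// 128x64 extended mode, with 6 vertices (two triangles) per pixel.
static CUSTOMVERTEX vScreen[128 * 64 * 6];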
Second, here's a little of my code:

Code:
void RenderScreen(int* _pixels,int number)
{
	if (bExit) return;
	//CUSTOMVERTEX vFiller[4];
	//CUSTOMVERTEX vFiller2[4];
	int index = 0;
	HRESULT hr;
	
	// Build one quad (two triangles) per lit pixel; returns the vertex count
	index = InitScreenVertices(vScreen,_pixels,number);
	//InitFillerVertices(vFiller,vFiller2);

	if (index > 0)
	{
		pd3dDevice->BeginScene();
		pd3dDevice->Clear(0,NULL,D3DCLEAR_TARGET,0,1.0f,0);
		hr = pd3dDevice->SetStreamSource(0, pVB, 0, sizeof(CUSTOMVERTEX));
		hr = pd3dDevice->SetFVF(D3DFVF_CUSTOMVERTEX);
		// triangle list: 3 vertices per primitive, hence index/3
		hr = pd3dDevice->DrawPrimitiveUP(D3DPT_TRIANGLELIST,index/3,vScreen,sizeof(CUSTOMVERTEX));
		pd3dDevice->EndScene();

		pd3dDevice->Present(0,0,0,0);
	}
}
Though I don't know if SetStreamSource is necessary.

And my vertices function:
Code:
inline int InitScreenVertices(CUSTOMVERTEX* pVertices,int* pixels,int number)
{
	int offsetx = 0;
	int index = 0;
	int xvalue = bExtendedScreen ? 128 : 64;	// SCHIP extended mode doubles the grid
	int yvalue = bExtendedScreen ? 64 : 32;

	//CUSTOMVERTEX* vertices = new CUSTOMVERTEX[ xvalue * yvalue * 3 ];
	for (int y=0; y < yvalue; y++)
	{
		for (int x=0; x < xvalue; x++)
		{
			if (screen[x+y*xvalue] != 1) continue;	// skip unlit pixels
			if (bMonitorDrawOpcode)
			{
				char msg[100];
				wsprintf(msg,"Pixel found at x: %i, y: %i.\n",x,y);
				DEBUGTRACE(msg);
			}

#pragma warning(disable: 4244)
			// first triangle: top-left, top-right, bottom-left
			pVertices[index].x = x*XMultiplier;
			pVertices[index].y = BORDER1_END + (y*YMultiplier);
			pVertices[index+1].x = x*XMultiplier+XMultiplier;
			pVertices[index+1].y = BORDER1_END + (y*YMultiplier);
			pVertices[index+2].x = x*XMultiplier;
			pVertices[index+2].y = BORDER1_END + (y*YMultiplier+YMultiplier);

			// second triangle: top-right, bottom-right, bottom-left
			pVertices[index+3].x = x*XMultiplier+XMultiplier;
			pVertices[index+3].y = BORDER1_END + (y*YMultiplier);
			pVertices[index+4].x = x*XMultiplier+XMultiplier;
			pVertices[index+4].y = BORDER1_END + (y*YMultiplier+YMultiplier);
			pVertices[index+5].x = x*XMultiplier;
			pVertices[index+5].y = BORDER1_END + (y*YMultiplier+YMultiplier);
#pragma warning(default: 4244)

			for (int i=0; i<6; i++)
			{
				pVertices[index+i].z = 1.0;
				pVertices[index+i].rhw = 1.0;
				pVertices[index+i].color = dwPixelColor;
			}		
			index += 6;
		}
	}
	//*pVertices = vertices;
	return index;
}
Now, as you can see, just fill your vertices and use DrawPrimitiveUP for rendering.
Good luck.
 

dwx

New member
@doom: You don't need SetStreamSource if you just use DrawPrimitiveUP.
And if your vertex format does not change, you don't even need to call SetFVF every frame; once at init is enough (see below).
There are a lot of ways to draw the Chip8 pixels in Direct3D ;-)
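(A minimal sketch of that idea, assuming a single fixed vertex format chosen at device-init time:)
Code:
// Hypothetical init-time setup: the vertex format never changes in this
// emulator, so set it once after creating the device, not once per frame.
pd3dDevice->SetFVF ( D3DFVF_CUSTOMVERTEX );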

cya dwx
 

Doomulation

Indeed I don't have to call SetFVF on every frame, but it doesn't incur a performance hit, and last time I tried, it wouldn't accept the vertices if I didn't :icecream:
 

ector

Emulator Developer
Rendering every pixel of your emulated display as a separate polygon seems a little overkill, don't you think?
A better approach in most cases would be to copy your display to a texture and draw a single quad mapped with it.
 

dwx

New member
... It's just 64x32 pixels, and they can only be set to ON or OFF. Also, Chip8 only has to render the screen when an opcode changes it (only 1 or 2 opcodes can do that).
So worst case you have to render 64x32 (= 2048) quads.

But like ector said: I would use a texture or something similar.
In my little emulator I only use a GDI call (StretchDIBits) to show my buffer, which is fast enough and is only one call compared to the whole Direct3D stuff ;-)
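(For reference, a sketch of that GDI path; hdc, clientWidth, clientHeight, and pixelBuffer are placeholder names, not dwx's actual code:)
Code:
// Blit a 32-bit 64x32 buffer straight to the window, scaled up by GDI.
BITMAPINFO bmi = {};
bmi.bmiHeader.biSize        = sizeof ( BITMAPINFOHEADER );
bmi.bmiHeader.biWidth       = 64;
bmi.bmiHeader.biHeight      = -32;               // negative height = top-down rows
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 32;
bmi.bmiHeader.biCompression = BI_RGB;

StretchDIBits ( hdc,
                0, 0, clientWidth, clientHeight, // destination rectangle (window)
                0, 0, 64, 32,                    // source rectangle (Chip8 buffer)
                pixelBuffer,                     // e.g. DWORD pixelBuffer[64*32]
                &bmi, DIB_RGB_COLORS, SRCCOPY );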

cya dwx
 
OP
glVertex3f

Crap Cracker
Yeah, I've been wanting to learn DirectX for a long time, so I figured I would make another Chip8 emulator and use it.

My biggest slowdown was creating/setting up the vertex buffer every call. I do stupid stuff like that.

Have any of you had problems creating the client area in a DirectX app? When I run my emu, a good bit of the bottom is "below" where it needs to be.
 

Doomulation

ector said:
Rendering every pixel of your emulated display as a separate polygon seems a little overkill, don't you think?
A better approach in most cases would be to copy your display to a texture and draw a single quad mapped with it..
Haha okay, you lost me. I'm not good with gfx.

glVertex: With the client area? No. Just remember that D3D automatically scales the backbuffer to the client area. For example, you can create a backbuffer of 1024 x 768 pixels on a window of only 320 x 280 pixels; D3D converts to client coordinates when drawing. Having small buffers results in big, blurry pixels.
It might also be a bug in your code. It's not always easy to put them exactly right.
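(To make the buffer/window distinction concrete, a minimal sketch of where the backbuffer size is chosen in D3D9; hWnd and the sizes are placeholders:)
Code:
// Hypothetical device setup: the backbuffer size is independent of the
// window's client size; D3D scales the image to the client area on Present.
D3DPRESENT_PARAMETERS d3dpp = {};
d3dpp.Windowed         = TRUE;
d3dpp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
d3dpp.BackBufferWidth  = 1024;   // drawing surface: 1024 x 768 ...
d3dpp.BackBufferHeight = 768;
d3dpp.hDeviceWindow    = hWnd;   // ... shown in a much smaller window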

My biggest slowdown was creating/setting up the vertex buffer every call. I do stupid stuff like that.
Me too, buddy ;) I followed the d3d tutorial, and it seems it was a bad idea. Somebody told me to try DrawPrimitiveUP, and damn, it worked much faster :eek:
 

zenogais

New member
Doomulation: I believe what ector means (sorry if I get this wrong) is that it would be easier to render the data to a texture and then quad-map that texture (something supported internally by DirectX) onto a square made up of two triangles.
 

Doomulation

Render it onto a texture and attach it to some pixels? =/
Sounds much more complicated than my usual rendering method.
 

ector

Emulator Developer
Nono, it's easy. I'm bored and frustrated with my latest zany project (nono, don't speculate :p), so I'll do a little tutorial.
First, declare your texture:
Code:
LPDIRECT3DTEXTURE9 texture; // make this a global or something
In your init function, create it (w, h = your width, height):
Code:
d3d->CreateTexture ( w, h, 0, D3DUSAGE_AUTOGENMIPMAP, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &texture, 0 ); // "d3d" here is the IDirect3DDevice9
Not sure if AUTOGENMIPMAP is a good idea with textures used like this, but if you want it to resize down nicely and not worry about it, use it :p

Then when you need to fill it with data, such as your Chip8 display, just do:

Code:
D3DLOCKED_RECT lr;
texture->LockRect ( 0, &lr, 0, 0 ); // lock the entire texture, mip level 0
for ( int y = 0; y < h; y++ )
{
	// lr.Pitch is in bytes and may be wider than w*4, so step row by row
	unsigned int *ptr = (unsigned int *)( (char *)lr.pBits + y * lr.Pitch );
	for ( int x = 0; x < w; x++ )
		*ptr++ = somecolorfromyourchip8orwhatever;
}
texture->UnlockRect ( 0 );

OK, now we want to draw it to the screen. First we need to tell D3D we want to use our texture:
Code:
d3d->SetTexture ( 0, texture );
Then let's draw the 2D quad. Make sure backface culling and fogging and stuff is off.

My ugly little routine for this is:
Code:
void quad2d ( float x1, float y1, float x2, float y2, DWORD color, float u1, float v1, float u2, float v2 )
{
	// pre-transformed (XYZRHW) vertices; the -0.5f offsets align texels with pixels
	struct Q2DVertex { float x, y, z, rhw; DWORD color; float u, v; } coords[4] = {
		{ x1 - 0.5f, y1 - 0.5f, 0, 1, color, u1, v1 },
		{ x2 - 0.5f, y1 - 0.5f, 0, 1, color, u2, v1 },
		{ x2 - 0.5f, y2 - 0.5f, 0, 1, color, u2, v2 },
		{ x1 - 0.5f, y2 - 0.5f, 0, 1, color, u1, v2 },
	};
	display->SetFVF ( D3DFVF_XYZRHW | D3DFVF_DIFFUSE | D3DFVF_TEX1 );
	display->DrawPrimitiveUP ( D3DPT_TRIANGLEFAN, 2, coords, sizeof ( Q2DVertex ) );
}

Pass in 0,0,1,1 for u1,v1,u2,v2, as these are the texture coordinates that will be used in the top-left and bottom-right corners.
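(Putting the pieces together, a frame might look something like the sketch below. Here device stands for the IDirect3DDevice9 that the snippets above call d3d and display; the 640x320 target size and the Begin/EndScene placement are my assumptions:)
Code:
// Hypothetical per-frame sequence built from the snippets above.
device->BeginScene ();
// ... LockRect / fill / UnlockRect as shown earlier ...
device->SetTexture ( 0, texture );
quad2d ( 0, 0, 640, 320, 0xFFFFFFFF, 0, 0, 1, 1 ); // whole texture on one quad
device->EndScene ();
device->Present ( 0, 0, 0, 0 );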

 

Doomulation

Ah, I think I get how it works. But do tell me how it's more efficient than drawing it just out of the blue? Because as I said, I'm very poor at this :p
 

ector

Emulator Developer
Well, for small resolutions like Chip8 it won't matter much :)
But sending a texture to the card is a lot fewer bytes to transfer, and a lot faster than sending all the commands for drawing that many polygons.
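(Back-of-envelope, from the numbers in this thread: a worst case of 2048 lit pixels as triangle lists is 2048 x 6 vertices x 20 bytes per DxVertex-style vertex, roughly 240 KB of vertex data per frame, plus one draw call per pixel in the original approach. The 64x32 display as an A8R8G8B8 texture is 64 x 32 x 4 = 8 KB, uploaded only when the screen changes and drawn with a single quad.)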
 
