
Nintendo 64 Console Specs

Elite Knight

Nintendo Fan Mad
If an admin or someone wants this removed, just PM me; even a normal member can warn me.

These aren't mine, but they might come in handy for video developers who want to know what the Nintendo 64's specs are.

Go here:

http://en.wikipedia.org/wiki/Nintendo_64
http://en.wikipedia.org/wiki/Environment_mapping
http://en.wikipedia.org/wiki/Gouraud_shading
http://en.wikipedia.org/wiki/Z-buffering
http://en.wikipedia.org/wiki/Anti-aliasing
http://en.wikipedia.org/wiki/Texture_mapping
http://en.wikipedia.org/wiki/Bilinear_filtering
http://en.wikipedia.org/wiki/Mip-mapping
http://en.wikipedia.org/wiki/Trilinear_filtering
http://en.wikipedia.org/wiki/Perspective-correct_texture_mapping


Z-buffering

In computer graphics, z-buffering is the management of image depth coordinates in three-dimensional (3-D) graphics, usually done in hardware, sometimes in software. It is one solution to the visibility problem, which is the problem of deciding which elements of a rendered scene are visible, and which are hidden. The painter's algorithm is another common solution which, though less efficient, can also handle non-opaque scene elements. Z-buffering is also known as depth buffering.

When an object is rendered by a 3D graphics card, the depth of a generated pixel (z coordinate) is stored in a buffer (the z-buffer or depth buffer). This buffer is usually arranged as a two-dimensional array (x-y) with one element for each screen pixel. If another object of the scene must be rendered in the same pixel, the graphics card compares the two depths and chooses the one closer to the observer. The chosen depth is then saved to the z-buffer, replacing the old one. In the end, the z-buffer will allow the graphics card to correctly reproduce the usual depth perception: a close object hides a farther one. This is called z-culling.

The granularity of a z-buffer has a great influence on the scene quality: a 16-bit z-buffer can result in artifacts (called "z-fighting") when two objects are very close to each other. A 24-bit or 32-bit z-buffer behaves much better, although the problem cannot be entirely eliminated without additional algorithms. An 8-bit z-buffer is almost never used since it has too little precision.
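To make the compare-and-replace step concrete, here is a minimal C++ sketch of the depth test described above; the framebuffer layout, float depths, and color type are assumptions for illustration, not how any particular graphics card implements it.

#include <vector>
#include <limits>

// A minimal framebuffer with a parallel depth buffer (the z-buffer).
struct Framebuffer {
    int width, height;
    std::vector<unsigned int> color; // one color per screen pixel
    std::vector<float> depth;        // one depth (z) per screen pixel

    Framebuffer(int w, int h)
        : width(w), height(h), color(w * h, 0),
          // Start every depth at "infinitely far away".
          depth(w * h, std::numeric_limits<float>::infinity()) {}

    // Plot a fragment only if it is closer than what is already stored;
    // this compare-and-replace is the z-culling described above.
    void plot(int x, int y, float z, unsigned int c) {
        int i = y * width + x;
        if (z < depth[i]) { // the new fragment is closer to the observer
            depth[i] = z;   // replace the stored depth
            color[i] = c;   // and the stored color
        }
    }
};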

Anti-aliasing

In digital signal processing, anti-aliasing is the technique of minimizing the distortion artifacts known as aliasing when representing a high-resolution signal at a lower resolution. Anti-aliasing is used in digital photography, computer graphics, digital audio, and many other domains.

In the image domain, aliasing artifacts can appear as wavy lines or bands, moiré patterns, popping, strobing, or unwanted sparkling; in the sound domain, as rough, inharmonic, or spurious tones, or as noise.

Anti-aliasing means removing signal components that have a higher frequency than the recording (or sampling) device can properly resolve. This removal is done before (re-)sampling at a lower resolution. When sampling is performed without removing this part of the signal, it causes undesirable artifacts such as black-and-white noise.

In signal acquisition and audio, anti-aliasing is often done using an analog anti-aliasing filter to remove the out-of-band component of the input signal prior to sampling with an analog-to-digital converter. In digital photography, optical anti-aliasing filters are made of birefringent materials, and smooth the signal in the spatial optical domain. The anti-aliasing filter essentially blurs the image slightly in order to reduce resolution to below the limit of the digital sensor (the larger the pixel pitch, the lower the achievable resolution at the sensor level).

See the articles on signal processing and aliasing for more information about the theoretical justifications for anti-aliasing; the remainder of this article is dedicated to anti-aliasing methods in computer graphics.
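As a concrete computer-graphics example, here is a minimal C++ sketch of supersampling, one common anti-aliasing method: the scene is sampled at a higher resolution and box-filtered down, which removes detail finer than the output pixel grid before the image is stored at the lower resolution. The shade callback and the 2× factor are assumptions for illustration.

#include <vector>
#include <functional>

// Render at 2x resolution and average each 2x2 block of samples down
// to one output pixel; the averaging is the low-pass filtering step.
std::vector<float> renderSupersampled(
    int width, int height,
    const std::function<float(float, float)>& shade) // hypothetical shader
{
    const int ss = 2; // supersampling factor per axis
    std::vector<float> out(width * height);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float sum = 0.0f;
            for (int sy = 0; sy < ss; ++sy)
                for (int sx = 0; sx < ss; ++sx)
                    sum += shade(x + (sx + 0.5f) / ss,
                                 y + (sy + 0.5f) / ss);
            out[y * width + x] = sum / (ss * ss); // box filter
        }
    }
    return out;
}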

Texture mapping

A texture map is applied (mapped) to the surface of a shape, or polygon. This process is akin to applying patterned paper to a plain white box. For example, a texture map of the Earth's coloration can be applied to a sphere to create the illusion of color detail that would otherwise require many additional polygons to realize.

Multitexturing is the use of more than one texture at a time on a polygon[1]. For instance, a light map texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is rendered. Another multitexture technique is bump mapping, which allows a texture to directly control the facing direction of a surface for the purposes of its lighting calculations; it can give a very good appearance of a complex surface, such as tree bark or rough concrete, that takes on lighting detail in addition to the usual detailed coloring. Bump mapping has become popular in recent video games as graphics hardware has become powerful enough to accommodate it.
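As a rough illustration of the light-map case, this tiny C++ sketch modulates a base texel by a light-map texel; the Color type is an assumption, and a real renderer would do this per pixel during texturing.

// Classic multitexture "modulate" combine: the precomputed light map
// scales the surface color instead of lighting being recomputed per frame.
struct Color { float r, g, b; };

Color modulate(Color base, Color light) {
    return { base.r * light.r, base.g * light.g, base.b * light.b };
}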

The way the resulting pixels on the screen are calculated from the texels (texture pixels) is governed by texture filtering. The fastest method is nearest-neighbour interpolation, but bilinear interpolation is commonly chosen as a good tradeoff between speed and accuracy. If a texture coordinate falls outside the texture, it is either clamped or wrapped.

Bilinear filtering is a texture filtering method used to smooth textures when displayed larger or smaller than they actually are.

Most of the time, when drawing a textured shape on the screen, the texture is not displayed exactly as it is stored; some stretching or shrinking is involved. Because of this, most pixels will end up needing a point on the texture that lies 'between' texels, assuming the texels are points (as opposed to, say, squares) in the middle (or on the upper left corner, or anywhere else; it doesn't matter, as long as it's consistent) of their respective 'cells'. Bilinear filtering uses these points to perform bilinear interpolation between the four texels nearest to the point that the pixel represents (in the middle or upper left of the pixel, usually).

Sample code, as a minimal self-contained C++ sketch (the small Texture struct is an assumed stand-in for a real texture type):

#include <cmath>
#include <vector>

// Minimal grayscale texture: size x size texels, row-major.
struct Texture {
    int size;
    std::vector<double> texels;
    // Clamped accessor so x+1 / y+1 never read out of bounds.
    double at(int x, int y) const {
        if (x < 0) x = 0; if (x >= size) x = size - 1;
        if (y < 0) y = 0; if (y >= size) y = size - 1;
        return texels[y * size + x];
    }
};

double getBilinearFilteredPixelColor(const Texture& tex, double u, double v) {
    // Map [0,1] coords to texel space; -0.5 treats texels as cell-center points.
    u = u * tex.size - 0.5;
    v = v * tex.size - 0.5;
    int x = (int)std::floor(u), y = (int)std::floor(v);
    double u_ratio = u - x, v_ratio = v - y;
    double u_opposite = 1 - u_ratio, v_opposite = 1 - v_ratio;
    // Weighted average of the four nearest texels.
    return (tex.at(x, y) * u_opposite + tex.at(x + 1, y) * u_ratio) * v_opposite +
           (tex.at(x, y + 1) * u_opposite + tex.at(x + 1, y + 1) * u_ratio) * v_ratio;
}

Mipmap

In 3D computer graphics texture filtering, MIP maps (also mipmaps) are pre-calculated, optimized collections of bitmap images that accompany a main texture, intended to increase rendering speed and reduce artifacts. They are widely used in 3D computer games, flight simulators and other 3D imaging systems. The technique is known as mipmapping. The letters "MIP" in the name are an acronym of the Latin phrase multum in parvo, meaning "much in a small space". Mipmaps require more space in memory, although they also form the basis of wavelet compression.

How it works

Each bitmap image of the mipmap set is a version of the main texture, but at a certain reduced level of detail. Although the main texture would still be used when the view is sufficient to render it in full detail, the renderer will switch to a suitable mipmap image (or in fact, interpolate between the two nearest, if trilinear filtering is activated) when the texture is viewed from a distance or at a small size. Rendering speed increases since the number of texture pixels ("texels") being processed can be much lower than with simple textures. Artifacts are reduced since the mipmap images are effectively already anti-aliased, taking some of the burden off the real-time renderer. Scaling down and up is made more efficient with mipmaps as well.

If the texture has a basic size of 256 by 256 pixels (textures are typically square and must have power-of-two side lengths, although this restriction does not exist in OpenGL 2.0+), then the associated mipmap set may contain a series of 8 images, each one-fourth the size of the previous one: 128×128 pixels, 64×64, 32×32, 16×16, 8×8, 4×4, 2×2, 1×1 (a single pixel). If, for example, a scene is rendering this texture in a space of 40×40 pixels, then an interpolation of the 64×64 and the 32×32 mipmaps would be used. The simplest way to generate these textures is by successive averaging; however, more sophisticated algorithms (perhaps based on signal processing and Fourier transforms) can also be used.
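As a sketch of the successive-averaging approach just mentioned, here is minimal C++ that builds the chain by averaging 2×2 blocks of texels; the flat grayscale image representation is an assumption for illustration.

#include <vector>

// One mip level is built from the previous by averaging each 2x2 block.
std::vector<float> nextMipLevel(const std::vector<float>& src, int size) {
    int half = size / 2;
    std::vector<float> dst(half * half);
    for (int y = 0; y < half; ++y)
        for (int x = 0; x < half; ++x)
            dst[y * half + x] = (src[(2 * y)     * size + 2 * x]     +
                                 src[(2 * y)     * size + 2 * x + 1] +
                                 src[(2 * y + 1) * size + 2 * x]     +
                                 src[(2 * y + 1) * size + 2 * x + 1]) / 4.0f;
    return dst;
}

// Build the whole chain, e.g. 256x256 -> 128x128 -> ... -> 1x1.
std::vector<std::vector<float>> buildMipChain(std::vector<float> base, int size) {
    std::vector<std::vector<float>> chain{ base };
    while (size > 1) {
        chain.push_back(nextMipLevel(chain.back(), size));
        size /= 2;
    }
    return chain;
}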

The increase in storage space required for all of these mipmaps is a third of the original texture, because the sum of the areas 1/4 + 1/16 + 1/64 + 1/256 + · · · converges to 1/3. (This assumes compression is not being used.) This is a major advantage to this selection of resolutions. However, in many instances, the filtering should not be uniform in each direction (it should be anisotropic, as opposed to isotropic), and a compromise resolution is used. If a higher resolution is used, the cache coherence goes down, and the aliasing is increased in one direction, but the image tends to be clearer. If a lower resolution is used, the cache coherence is improved, but the image is overly blurry, to the point where it becomes difficult to identify.
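The one-third figure comes from summing the geometric series of mip level areas:

\frac{1}{4} + \frac{1}{16} + \frac{1}{64} + \cdots = \sum_{i=1}^{\infty} \left(\frac{1}{4}\right)^{i} = \frac{1/4}{1 - 1/4} = \frac{1}{3}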

To help with the anisotropy problem, nonuniform mipmaps (also known as rip-maps) are sometimes used. With a 16×16 base texture map, the rip-map resolutions would be 16×8, 16×4, 16×2, 16×1, 8×16, 8×8, 8×4, 8×2, 8×1, 4×16, 4×8, 4×4, 4×2, 4×1, 2×16, 2×8, 2×4, 2×2, 2×1, 1×16, 1×8, 1×4, 1×2 and 1×1.

The unfortunate problem with this approach is that rip-maps require four times as much memory as the base texture map, and so rip-maps have been very unpopular. Also, for 1×4 and more extreme anisotropy, additional maps rotated by 45° would be needed, and the real memory requirement grows more than linearly.

To reduce the memory requirement, and simultaneously give more resolutions to work with, summed-area tables were conceived. Given a texture t_{jk}, we can build a summed-area table s_{jk} as follows. The summed-area table has the same number of entries as there are texels in the texture map. Then, define

s_{mn} := \sum_{1 \leq j \leq m,\ 1 \leq k \leq n} t_{jk}

Then, the average of the texels in the rectangle (a_1, a_2] \times (b_1, b_2] is given by

\frac{s_{a_2 b_2} - s_{a_1 b_2} - s_{a_2 b_1} + s_{a_1 b_1}}{(a_2 - a_1)(b_2 - b_1)}

However, this approach tends to exhibit poor cache behavior. Also, a summed area table needs to have wider types to store the partial sums sjk than the word size used to store tjk. For these reasons, there isn't any hardware that implements summed-area tables today.
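For reference, a minimal C++ sketch of building and querying a summed-area table as defined above; the int texel type and the long long partial sums (the wider type just mentioned) are assumptions for illustration.

#include <vector>

// s[m][n] holds the sum of all texels t[j][k] with j <= m and k <= n;
// the extra zero row and column make the formula uniform at the edges.
std::vector<std::vector<long long>> buildSAT(
    const std::vector<std::vector<int>>& t)
{
    int n = (int)t.size();
    std::vector<std::vector<long long>> s(n + 1, std::vector<long long>(n + 1, 0));
    for (int j = 1; j <= n; ++j)
        for (int k = 1; k <= n; ++k)
            s[j][k] = t[j - 1][k - 1] + s[j - 1][k] + s[j][k - 1] - s[j - 1][k - 1];
    return s;
}

// Average of the texels in the rectangle (a1,a2] x (b1,b2], per the
// formula above: four lookups regardless of the rectangle's size.
double boxAverage(const std::vector<std::vector<long long>>& s,
                  int a1, int a2, int b1, int b2)
{
    long long sum = s[a2][b2] - s[a1][b2] - s[a2][b1] + s[a1][b1];
    return (double)sum / ((a2 - a1) * (b2 - b1));
}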

A compromise has been reached today, called anisotropic mip-mapping. In the case where an anisotropic filter is needed, a higher resolution mipmap is used, and several texels are averaged in one direction to get more filtering in that direction. This has a somewhat detrimental effect on the cache, but greatly improves image quality.

Trilinear filtering

Trilinear filtering is an extension of the bilinear texture filtering method, which also performs linear interpolation between mipmaps.

Bilinear filtering has several weaknesses that make it an unattractive choice in many cases: using it on a full-detail texture when scaling to a very small size causes accuracy problems from missed texels, and compensating for this by using multiple mipmaps throughout the polygon leads to abrupt changes in blurriness, which is most pronounced in polygons that are steeply angled relative to the camera.

To solve this problem, trilinear filtering interpolates between the results of bilinear filtering on the two mipmaps nearest to the detail required for the polygon at the pixel. If the pixel would take up 1/100 of the texture in one direction, the ideal mipmap would be 100×100 texels; since no such level exists, trilinear filtering takes the result of bilinear filtering on the 128×128 mipmap as y1 (with x1 = 128) and the result on the 64×64 mipmap as y2 (with x2 = 64), and then linearly interpolates to x = 100.

How it works

The first step in this process is, of course, to determine how big the pixel in question is in terms of the texture. There are a few ways to do this, and the ones mentioned here are not necessarily representative of all of them.

* Use the distance along the texture between the current pixel and the pixel to its right (or left, or above, or below) as the size of the pixel.
* Use the smallest (or biggest, or average) of the various sizes determined by using the above method.
* Determine the uv-values of the corners of the pixel, use those to calculate the area of the pixel, and figure out how many pixels of the exact same size would take up the whole texture.

Once this is done, the rest becomes easy: perform bilinear filtering on the two mipmaps with pixel sizes that are immediately larger and smaller than the calculated size of the pixel, and then interpolate between them as normal.
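Putting the pieces together, here is a minimal C++ sketch of that last step, reusing the Texture struct and getBilinearFilteredPixelColor() from the bilinear sample above; the mip-chain layout and the texelsPerPixel input (the pixel's calculated size) are assumptions for illustration.

#include <cmath>
#include <vector>

double getTrilinearFilteredPixelColor(const std::vector<Texture>& mips,
                                      double u, double v,
                                      double texelsPerPixel)
{
    // Continuous level of detail: level 0 when one texel covers the
    // pixel, one level higher each time the coverage doubles.
    double lod = std::log2(texelsPerPixel < 1.0 ? 1.0 : texelsPerPixel);
    int last = (int)mips.size() - 1;
    int lo = (int)lod;   if (lo > last) lo = last;
    int hi = lo + 1;     if (hi > last) hi = last;
    double t = lod - lo; if (t > 1.0) t = 1.0;

    // Bilinear filter on the two nearest mipmaps, then interpolate.
    double c0 = getBilinearFilteredPixelColor(mips[lo], u, v);
    double c1 = getBilinearFilteredPixelColor(mips[hi], u, v);
    return c0 * (1.0 - t) + c1 * t;
}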

Since it uses both larger and smaller mipmaps, trilinear filtering cannot be used in places where the pixel is smaller than a texel on the original texture, because mipmaps larger than the original texture are not defined. Fortunately bilinear filtering still works, and can be used in these situations without worrying too much about abruptness because bilinear and trilinear filtering provide the same result when the pixel size is exactly the same as the size of a texel on the appropriate mipmap.

Trilinear filtering still has weaknesses, because the pixel is still assumed to take up a square area on the texture. In particular, when a texture is at a steep angle compared to the camera, detail can be lost because the pixel actually takes up a narrow but long trapezoid: in the narrow direction, the pixel is getting information from more texels than it actually covers (so details are smeared), and in the long direction the pixel is getting information from fewer texels than it actually covers (so details fall between pixels). To alleviate this, anisotropic ("direction dependent") filtering can be used.

Perspective-correct texture mapping (keeps textures from "warping" when viewed at different angles)
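In brief, the trick (sketched below in C++ under assumed names; this is not the full algorithm) is that attributes divided by w interpolate linearly in screen space, so the rasterizer interpolates u/w and 1/w and divides afterwards:

// Perspective-correct interpolation of a texture coordinate between two
// vertices with clip-space w values w0 and w1, at screen-space fraction t.
double perspectiveCorrectU(double u0, double w0,
                           double u1, double w1, double t)
{
    double uOverW   = (u0 / w0) * (1 - t) + (u1 / w1) * t;
    double oneOverW = (1 / w0)  * (1 - t) + (1 / w1)  * t;
    return uOverW / oneOverW; // recover the true u at this pixel
}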

That's all for that; read the articles at the links above for proper detail.

Reflection mapping

In computer graphics, reflection mapping is an efficient method of simulating a complex mirroring surface by means of a precomputed texture image. The texture is used to store the image of the environment surrounding the rendered object. There are several ways of storing the surrounding environment; the most common are standard environment mapping, in which a single texture contains the image of the surroundings as reflected on a mirror ball, and cubic environment mapping, in which the environment is unfolded onto the six faces of a cube and therefore stored as six square textures.

This kind of approach is more efficient than the classical ray tracing approach of computing the exact reflection by shooting a ray and following its optically exact path, but it should be noted that these are (sometimes crude) approximations of the real reflection. A typical drawback of this technique is the absence of self reflections: you cannot see any part of the reflected object inside the reflection itself.
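As a rough C++ sketch of the standard (mirror-ball) case: reflect the view direction about the surface normal, then map the reflection vector to coordinates in the precomputed texture. The Vec3 helpers are assumptions, and the (u, v) mapping shown is one common sphere-map convention.

#include <cmath>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirror reflection of incident direction I about unit normal N:
// R = I - 2 (N . I) N
Vec3 reflect(Vec3 I, Vec3 N) {
    double d = 2.0 * dot(N, I);
    return { I.x - d * N.x, I.y - d * N.y, I.z - d * N.z };
}

// Sphere-map lookup: map the reflection vector to (u, v) in [0,1].
void sphereMapUV(Vec3 R, double& u, double& v) {
    double m = 2.0 * std::sqrt(R.x * R.x + R.y * R.y + (R.z + 1.0) * (R.z + 1.0));
    u = R.x / m + 0.5;
    v = R.y / m + 0.5;
}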

Gouraud shading

Gouraud shading, named after Henri Gouraud, is a method used in computer graphics to simulate the differing effects of light and colour across the surface of an object. In practice, Gouraud shading is used to achieve smooth lighting on low-polygon surfaces without the heavy computational requirements of calculating lighting for each pixel. Gouraud first published the technique in 1971.

The basic principle behind the method is as follows: An estimate to the surface normal of each vertex in a 3D model is found by averaging the surface normals of polygons which meet at each vertex. Using these estimates, lighting computations based on the Phong reflection model are then performed to produce colour intensities at the vertices. Screen pixel intensities can then be bilinearly interpolated from the colour values calculated at the vertices.
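A minimal C++ sketch of those steps; the diffuse-only (Lambert) lighting and the single linear-interpolation helper are simplifying assumptions, since the full Phong reflection model also has ambient and specular terms, and a rasterizer interpolates across the whole polygon rather than along one segment.

struct Vec3 { double x, y, z; };

// Per-vertex intensity from the vertex normal (assumed already averaged
// from the surrounding face normals) and the light direction.
double vertexIntensity(Vec3 n, Vec3 lightDir) {
    double d = n.x * lightDir.x + n.y * lightDir.y + n.z * lightDir.z;
    return d > 0 ? d : 0; // ignore light arriving from behind the surface
}

// Per-pixel intensity, interpolated from vertex intensities; repeated
// along edges and then across scanlines, this gives the bilinear
// interpolation mentioned above.
double interpolate(double i0, double i1, double t) {
    return i0 * (1 - t) + i1 * t;
}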

Gouraud shading's strengths and weaknesses lie in its interpolation. Interpolating colour values for most pixels between just a few values taken from expensive lighting calculations is much less processor intensive than performing those calculations for each pixel, as is done in Phong shading (not to be confused with the Phong reflection model, which is used in both Gouraud and Phong shading). However, highly localized lighting effects (such as specular highlights, e.g. the glint of reflected light on the surface of an apple) will not be rendered correctly. If a highlight lies in the middle of a polygon but does not spread to a vertex, it will not be apparent in a Gouraud rendering; if a highlight occurs at a vertex, it will be rendered correctly there (as this is where the lighting model is applied), but will be spread unnaturally across all neighboring polygons by the interpolation. The problem is easily spotted in a rendering that ought to have a specular highlight moving smoothly across the surface of a model as it rotates: Gouraud shading will instead produce a highlight that continuously fades in and out across neighboring portions of the model, peaking in intensity when the intended highlight passes over a vertex.

Despite the drawbacks, Gouraud shading is considered superior to flat shading, which requires significantly less processing than Gouraud, but gives low-polygon models a sharp, faceted look.

That's all; go to the site for better and more info.

Elite Knight

Nintendo Fan Mad
If they just made a plugin with these things, it might make games work right. I still have an N64; I will take a look at it, see what video card or video system it has, and look it up.

MasterPhW

Master of the Emulation Flame
Elite Knight, why did you post this information? It is easily accessible to everyone through Wikipedia; they just have to search a little bit.

Allnatural

New member
Moderator
Your naiveté is charming. There's nothing wrong with what you're doing, but the programmers are already familiar with this information, and it's not as simple as "make a plugin with these things and games will work right."

Elite Knight

Nintendo Fan Mad
I'm learning C++ right now; I'm a beginner.

EDIT: MasterPhW, give me a break; my mother just got out of the hospital, and I'm worried that her arm won't heal right. Not getting the right sleep is keeping me from thinking straight, so please give me a break.

Doomulation

?????????????????????????
If they just made a plugin with these things, it might make games work right. I still have an N64; I will take a look at it, see what video card or video system it has, and look it up.

That isn't the problem. We know the specs of the N64 and what it can do, but the problem is making it work right. A dynamic recompiler, for one, isn't an easy thing to write, and even though we know what the machine is capable of doing, we don't know how it does it; therein lies the challenge.

Elite Knight

Nintendo Fan Mad
I know it isn't that easy. I've tested a few C++ programs I wrote; none worked very well. The best I got was Super Mario 64 with a dissolve effect, and even I don't know how I got that.

mudlord

Banned
The best I got was Super Mario 64 with a dissolve effect, and even I don't know how I got that.

Well, you could have used a plugin that already supports it via pixel shaders, or something based on Perlin noise. But still, Perlin noise in graphics is not exactly beginner material... not to mention that many of the special effects the N64 can pull off are done through the framebuffer.
