COPYRIGHT NOTICE. COPYRIGHT 2007-2020 by Clinton Jeffery. For use only by the University of Idaho CS 428/528 class.
spring break, Covid-19, etc.
With that in mind, you might consider:
About network programming, we learned
(source: openglprojects.in)
But really, complex shapes are usually composed of lots of triangles, organized into data structures called 3D models; more on that shortly.
If you develop in C/C++, there are two libraries that are "completely standard" OpenGL: libGL and libGLU.
Within the world coordinate system, the camera:
// in the application class
public PerspectiveCamera cam;

// ... in the application's create()
cam = new PerspectiveCamera(67, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
cam.position.set(2, 2, 2);
cam.lookAt(0, 0, 0);
cam.near = 1f;
cam.far = 300f;
cam.update();
// in the application class
public ModelBatch modelBatch;
public Model model;
public ModelInstance instance;

// ... in the create()
modelBatch = new ModelBatch();
ModelBuilder modelBuilder = new ModelBuilder();
model = modelBuilder.createSphere(2, 2, 2, 20, 20,
    new Material(ColorAttribute.createDiffuse(Color.YELLOW)),
    Usage.Position | Usage.Normal);
instance = new ModelInstance(model);

The code that makes models appear onscreen is given later in render().
environment = new Environment();
environment.set(new ColorAttribute(
    ColorAttribute.AmbientLight, 0.4f, 0.4f, 0.4f, 1f));
environment.add(new DirectionalLight().set(
    0.8f, 0.8f, 0.8f, -1f, -0.8f, -0.2f));
public class MyModelTest extends ApplicationAdapter {
    public Environment environment;
    public CameraInputController camController;

    @Override
    public void create() {
        // ... environment code
        // ... camera code
        // ... model code
        camController = new CameraInputController(cam);
        Gdx.input.setInputProcessor(camController);
    }
posx,posy,posz,velx,vely,velz
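A state packet with the fields above can be packed and parsed in a few lines. This is a hypothetical sketch of the comma-separated format shown; the EntityState class and method names are assumptions, not from any CVE or libGDX API.

```java
// Hypothetical sketch: serializing/parsing a position+velocity update
// in the comma-separated field order posx,posy,posz,velx,vely,velz.
public class EntityState {
    public double posx, posy, posz, velx, vely, velz;

    // Serialize to the wire format.
    public String pack() {
        return posx + "," + posy + "," + posz + ","
             + velx + "," + vely + "," + velz;
    }

    // Parse a packet of the form "posx,posy,posz,velx,vely,velz".
    public static EntityState parse(String packet) {
        String[] f = packet.split(",");
        EntityState s = new EntityState();
        s.posx = Double.parseDouble(f[0]);
        s.posy = Double.parseDouble(f[1]);
        s.posz = Double.parseDouble(f[2]);
        s.velx = Double.parseDouble(f[3]);
        s.vely = Double.parseDouble(f[4]);
        s.velz = Double.parseDouble(f[5]);
        return s;
    }
}
```

Sending velocities along with positions lets the receiver dead-reckon between updates.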
Asynchronous means all N machines just send their packets whenever they are ready, and nothing is scheduled. Asynchronous is usually better, but it takes extra coding to handle asynchronous communications.
Game developers tend not to know network coding, and want to outsource it to some black-box game-network library. Just because you pick a 3rd party network library does not mean things will magically be easy. Such libraries depend on, and can't do better than, the underlying OS (C) APIs and their semantics. Plus, they tend to impose their own additional weirdnesses that tie you to them.
public void run() {
    while (true) {
        receive();
        Thread.yield();
    }
}

A clean separate 3-thread execution model (one thread for view/graphics, one for controller/network, and the third for model/game) isn't a bad software architecture on machines with 4+ cores. The main thing then would be how the threads communicate.
Property Name | Major Issues | WoW's Characteristics
---|---|---
Player Characters | avatar, role, progression, inventory |
World | size, navigation, open or fenced, known or discovered |
Non-Player Characters | aid or attack, passive or aggressive, interaction depth |
Story | multiplicity, depth, climax |
Quests | voluntary or conscript, number, timeline |
Society | politics, social standing, benefits, drawbacks |
Community | guilds, groups, raids |
Economy | scarcity, gathered vs. crafted, auctions vs. merchants |
Violations | any breaks in immersion where the "game" falls flat |
assets = new AssetManager();
assets.load("car.g3dj", Model.class);
assets.finishLoading();
model = assets.get("car.g3dj", Model.class);
instance = new ModelInstance(model);
Class was cancelled on Wednesday February 5. Sorry!
public void render() {
    camController.update();
    Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
    modelBatch.begin(cam);
    modelBatch.render(instance, environment);
    modelBatch.end();
}
[Oehlke/Nair] gives a 3D example that uses 12 model instances. Will all 12 be visible? If you have a reasonable (small) number of models, brute force is an option:
private boolean isVisible(final Camera cam, final ModelInstance instance) {
    Vector3 position;
    instance.transform.getTranslation(position = new Vector3());
    BoundingBox box = instance.calculateBoundingBox(new BoundingBox());
    return cam.frustum.boundsInFrustum(position, box.getDimensions());
}

... allowing the render() to end up like this:

public void render() {
    ...
    modelBatch.begin(cam);
    for (ModelInstance instance : instances) {
        if (isVisible(cam, instance)) {
            modelBatch.render(instance, environment);
        }
    }
    modelBatch.end();
    ...
}
The isVisible() method could be called maybeVisible(). For example, boundsInFrustum() doesn't know if something is in front of an object, or if an object is inside another object.
Ray picking == shooting a line (ray) from the camera through a point on the viewport (computable from x,y screen coordinates by transforming them into world coordinates) and on into objects in the viewing frustum, to see what the user clicked on.
In the application's create() method, using an anonymous subclass of CameraInputController:
camController = new CameraInputController(cam) {
    private final Vector3 position = new Vector3();

    @Override
    public boolean touchUp(int screenX, int screenY, int pointer, int button) {
        Ray ray = cam.getPickRay(screenX, screenY);
        for (int i = 0; i < instances.size; i++) {
            ModelInstance instance = instances.get(i);
            instance.transform.getTranslation(position);
            // compute this instance's bounding box for the hit test
            BoundingBox box = instance.calculateBoundingBox(new BoundingBox());
            if (Intersector.intersectRayBoundsFast(ray, position, box.getDimensions())) {
                instances.removeIndex(i);
                i--;
            }
        }
        return super.touchUp(screenX, screenY, pointer, button);
    }
};
Gdx.input.setInputProcessor(camController);

Note that there is probably a less dorky example that would demonstrate selecting the "nearest" of the hit objects, rather than just deleting them ALL from the instances array.
-------------------     -------------------
| DesktopLauncher |     | AndroidLauncher |
-------------------     -------------------
           \               /
            ------------
            |   Core   |
            ------------
                  |
        --------------------
        | Main Menu Screen |
        --------------------
              |        \
          ----------    \
          | GameUI |     \
          ----------      \
              /            \
 -----------------------    -------------
 | Leaderboards Screen |<--| Game Screen |
 -----------------------    -------------
                                  \
                               ---------
                               | World |
                               ---------
Component classes such as ModelComponent are supposed to be data bags with no behavior. Despite the fact that this is in general considered a bad OO practice, it might be justifiable as it sort of fits the flyweight design pattern.
The full Ashley EntitySystem looks like:
public abstract class EntitySystem {
    public EntitySystem();
    public EntitySystem(int priority);
    public void addedToEngine(Engine engine);
    public void removedFromEngine(Engine engine);
    public void update(float deltaTime);
    public boolean checkProcessing();
    public void setProcessing(boolean processing);
}

Basically, in addition to an update(deltaTime), entity systems have insert/delete hooks on an engine, a setter and getter for a boolean processing flag, and an optional priority.
BulletPhysics has gone through a lot of different versions, and the above link might not be for the same version of Bullet you have. Although the MotionStates info may be the same for whatever version you've got, you might want to check.
models = new Array<Model>();
modelbuilder = new ModelBuilder();

// creating a ground model using box shape
float groundWidth = 40;
modelbuilder.begin();
MeshPartBuilder mpb = modelbuilder.part("parts", GL20.GL_TRIANGLES,
    Usage.Position | Usage.Normal | Usage.Color,
    new Material(ColorAttribute.createDiffuse(Color.WHITE)));
mpb.setColor(1f, 1f, 1f, 1f);
mpb.box(0, 0, 0, groundWidth, 1, groundWidth);
Model model = modelbuilder.end();
models.add(model);
groundInstance = new ModelInstance(model);

// creating a sphere model
float radius = 2f;
final Model sphereModel = modelbuilder.createSphere(
    radius, radius, radius, 20, 20,
    new Material(ColorAttribute.createDiffuse(Color.RED),
        ColorAttribute.createSpecular(Color.GRAY),
        FloatAttribute.createShininess(64f)),
    Usage.Position | Usage.Normal);
models.add(sphereModel);
sphereInstance = new ModelInstance(sphereModel);
sphereInstance.transform.trn(0, 10, 0);
Nair's explanations come in a later section, after the code is presented. You might want to skip the code section and read the explanation first.
What are you supposed to see here?
private btDefaultCollisionConfiguration collisionConfiguration;
private btCollisionDispatcher dispatcher;
private btDbvtBroadphase broadphase;
private btSequentialImpulseConstraintSolver solver;
private btDiscreteDynamicsWorld world;
private Array<btCollisionShape> shapes = new Array<btCollisionShape>();
private Array<btRigidBodyConstructionInfo> bodyInfos =
    new Array<btRigidBody.btRigidBodyConstructionInfo>();
private Array<btRigidBody> bodies = new Array<btRigidBody>();
private btDefaultMotionState sphereMotionState;
entities = e.getEntitiesFor(Family.all(ModelComponent.class).get())

In Ashley, this fetches the entities within Engine e that contain a ModelComponent. The method "all" is normally used with multiple parameters to fetch entities that have several specified components. Family.one and Family.exclude are other common filters applied to entities.
// Initiating Bullet Physics
Bullet.init();

// setting up the world
collisionConfiguration = new btDefaultCollisionConfiguration();
dispatcher = new btCollisionDispatcher(collisionConfiguration);
broadphase = new btDbvtBroadphase();
solver = new btSequentialImpulseConstraintSolver();
world = new btDiscreteDynamicsWorld(dispatcher, broadphase, solver,
    collisionConfiguration);
world.setGravity(new Vector3(0, -9.81f, 1f));

// creating ground body
btCollisionShape groundshape = new btBoxShape(new Vector3(20, 1 / 2f, 20));
shapes.add(groundshape);
btRigidBodyConstructionInfo bodyInfo =
    new btRigidBodyConstructionInfo(0, null, groundshape, Vector3.Zero);
this.bodyInfos.add(bodyInfo);
btRigidBody body = new btRigidBody(bodyInfo);
bodies.add(body);
world.addRigidBody(body);

// creating sphere body
sphereMotionState = new btDefaultMotionState(sphereInstance.transform);
sphereMotionState.setWorldTransform(sphereInstance.transform);
final btCollisionShape sphereShape = new btSphereShape(1f);
shapes.add(sphereShape);
bodyInfo = new btRigidBodyConstructionInfo(1, sphereMotionState, sphereShape,
    new Vector3(1, 1, 1));
this.bodyInfos.add(bodyInfo);
body = new btRigidBody(bodyInfo);
bodies.add(body);
// (presumably a world.addRigidBody(body) follows here, as for the ground)
world.stepSimulation(Gdx.graphics.getDeltaTime(), 5, 1/60.0f);
sphereMotionState.getWorldTransform(sphereInstance.transform);
public class MyContactListener extends ContactListener {
    @Override
    public void onContactStarted(btCollisionObject colObj0,
                                 btCollisionObject colObj1) {
        Gdx.app.log(this.getClass().getName(), "onContactStarted");
    }
}

and in your game class's create():

MyContactListener contactListener = new MyContactListener();
import org.json.*; worked. Parse a string with new JSONObject(s), then use the object's get*(key) methods to fetch elements, such as getJSONObject(), getJSONArray(), getLong()...
"texture": "csacwalls.gif", "floor": {"class": "Quad", "texture": "csaccarpet.gif"}
"walls": "csacwalls.gif", "floor": "csaccarpet.gif",or
"walls": {"class": "Quad", "texture": "csacwalls.gif"}, "floor": {"class": "Quad", "texture": "csaccarpet.gif"},
Obsolete Protocols | Live Protocols
---|---
Calls like

wall = modelBuilder.createBox(...)

can be replaced by

modelBuilder.begin();
modelBuilder.setUVRange(0, 0, repeatX, repeatY);
modelBuilder.part("box", GL10.GL_TRIANGLES, attributes, material)
    .box(width, height, depth);
wall = modelBuilder.end();

Where did the material come from? Instead of a color material, try a textured material:
Texture walls = new Texture(Gdx.files.internal("Objects/walls.jpg"));
walls.setFilter(TextureFilter.Linear, TextureFilter.Linear);
walls.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat);
material = new Material(TextureAttribute.createDiffuse(walls));
Lecture #16 was a guest lecture overview of Blender 3D model operations.
Lecture #17 was a guest lecture overview of Blender texturing-by-painting, UV mapping, and the like.
More broadly, I am trying to incrementally get us from the Di Giuseppe one-player FPS to MMO-style multi-player. There are architecture questions.
Java:

/*
 * Simple Java TCP Server. Adapted from
 * https://systembash.com/a-simple-java-tcp-server-and-tcp-client/
 * which in turn is from "Computer Networking" by Kurose and Ross.
 */
import java.io.*;
import java.net.*;

class TCPServer {
    public static void main(String argv[]) throws Exception {
        String line;
        String capitalizedSentence;
        ServerSocket welcomeSocket = new ServerSocket(6789);
        while (true) {
            Socket n = welcomeSocket.accept();
            BufferedReader inFromClient = new BufferedReader(
                new InputStreamReader(n.getInputStream()));
            DataOutputStream outToClient =
                new DataOutputStream(n.getOutputStream());
            line = inFromClient.readLine();
            System.out.println("Received: " + line);
            System.out.flush();
            capitalizedSentence = line.toUpperCase() + '\n';
            outToClient.writeBytes(capitalizedSentence);
        }
    }
}

Unicon:

procedure main()
   repeat {
      if not (n := open(":6789", "na")) then stop("server: no socket")
      while line := read(n) do {
         write("Received: ", line)
         write(n, map(line, &lcase, &ucase))
      }
   }
end
Simple Java Client:

/*
 * Simple Java TCP Client. Adapted from
 * https://systembash.com/a-simple-java-tcp-server-and-tcp-client/
 * which is from "Computer Networking" by Kurose and Ross.
 */
import java.io.*;
import java.net.*;

class TCPClient {
    public static void main(String argv[]) throws Exception {
        String sentence;
        String modifiedSentence;
        BufferedReader inFromUser =
            new BufferedReader(new InputStreamReader(System.in));
        Socket clientSocket = new Socket("localhost", 6789);
        DataOutputStream outToServer =
            new DataOutputStream(clientSocket.getOutputStream());
        BufferedReader inFromServer = new BufferedReader(
            new InputStreamReader(clientSocket.getInputStream()));
        sentence = inFromUser.readLine();
        outToServer.writeBytes(sentence + '\n');
        modifiedSentence = inFromServer.readLine();
        System.out.println("FROM SERVER: " + modifiedSentence);
        clientSocket.close();
    }
}

Unicon:

procedure main()
   if not (n := open("localhost:6789","n")) then stop("no socket")
   while line := read() do {
      write(n, line)
      write(read(n))
   }
end
Consider this a discussion of TCP to complement [S/O Ch. 4] which only discusses UDP.
Options for going multi-user:
What would you have to do to make it more completely fit?
select() and non-blocking I/O.
You have to ask the operating system to put a socket in non-blocking I/O mode.
C:

if ((new_fd = accept(sockfd, (struct sockaddr *)&their_addr,
                     &sin_size)) == -1) {
   perror("accept");
}
/* Change the sockets into non-blocking state */
fcntl(last_fd, F_SETFL, O_NONBLOCK);
fcntl(new_fd, F_SETFL, O_NONBLOCK);

Java:

// ... after an accept(), on a SocketChannel
sc.configureBlocking(false);
If you have many sockets, select() can check them all for pending input. select() will let you service many connections from a single thread... so long as you do not block waiting for input from any of them. You need a non-blocking read() to make optimal use of select() (why?).
C select():

int select(int maxfd, fd_set *readset, fd_set *writeset,
           fd_set *exceptset, const struct timeval *timeout);

Returns: a positive count of descriptors ready, 0 on timeout, -1 on error.
Arguments: maxfd (highest descriptor number plus one), the read, write, and exception descriptor sets, and an optional timeout.

Java select():

Selector selector = Selector.open();
ServerSocketChannel ssChannel = ServerSocketChannel.open();
ssChannel.configureBlocking(false);
ssChannel.socket().bind(new InetSocketAddress(hostIPAddress, port));
ssChannel.register(selector, SelectionKey.OP_ACCEPT);
while (true) {
    if (selector.select() <= 0) {
        continue;
    }
    processReadySet(selector.selectedKeys());
}
...
public static void processReadySet(Set readySet) throws Exception {
    Iterator iterator = readySet.iterator();
    while (iterator.hasNext()) {
        SelectionKey key = (SelectionKey) iterator.next();
        iterator.remove();
        if (key.isAcceptable()) {
            // ... go ahead and do an ssChannel.accept()
            // which gives you a SocketChannel, not a socket
        }
        if (key.isReadable()) {
            // ... key.channel() gives you a SocketChannel
            // ... do a non-blocking read from the SocketChannel
        }
    }
}
select() in the traditional separate-process-per-user server model. Some notes are here.
For a multi-user game, how much work goes in the client, and how much in the server?
grade distribution:
94 92 90 89 89
--------------------------- A
87 86 82 82
--------------------------- B
70
The best answers on the 3D vs. 2D question managed to say something deeper than "3D is more difficult to program than 2D", although that is certainly true. It was good if you noted that the mechanics in 3D are often more complicated, and that 3D often spends a much higher percentage of its budget on assets. Several of you noted that many game genres can use either 2D or 3D, but some genres are tied to one or the other. For example, it might be hard to imagine a 3D side-scroller, or a 2D first-person shooter (although there are lots of 2D "shooters").

The best answers on multi-user vs. single-user managed to say something deeper than "multi-user is more difficult to program than single-user". Some folks said, or almost said, that playing with (or against) other humans reduces the need in multi-user games for good AI -- because the other humans constitute nonartificial intelligence.
\login -cvecypherusername password
\newuser username password FirstName LastName email affiliation
\transfer filename filesize server
\logout
\login -cvecypherusername password
\version 8.9
\users
\setip
\checkforupdates n
\updatelocations username
\back Online (twice?!)

The \back command is part of the AFK system, which allows other clients to know when you are AFK.
\latency n
\move username body x y z a (often at least two in a packet)
\move username part right_arm fb 10

part moves apply transformations to rigging/bones under program control. In this example, set the right arm rotation to 10 degrees.
\updateMode username 3D
\success (new user creation succeeded)
\request filename filesize server port
After \checkforupdates, the server responded with a packet with no command at the beginning! It contained a copy of the avatar file for the new user, in a mangled string with format filename\tcontents and newlines represented by $. It then sent a line consisting of BREAK.
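The mangling described above can be sketched in a few lines. This is a hypothetical reconstruction from the description (filename, a tab, then the contents with newlines replaced by $); the class and method names are illustrative, not from the CVE codebase. Note one limitation of this encoding: file contents containing a literal $ cannot round-trip.

```java
// Hypothetical sketch of the mangled file-transfer line format:
// "filename\tcontents", with newlines in the contents replaced by '$'.
public class FileTransfer {
    // Mangle a file into a single transmittable line.
    public static String mangle(String filename, String contents) {
        return filename + "\t" + contents.replace("\n", "$");
    }

    // Recover {filename, contents} from a mangled line.
    public static String[] unmangle(String line) {
        int tab = line.indexOf('\t');
        String filename = line.substring(0, tab);
        String contents = line.substring(tab + 1).replace("$", "\n");
        return new String[] { filename, contents };
    }
}
```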
method run()
select( socket_list, timeout )
ready(socket)
write(socket, s)
record SocketDriver, which should be a class, tracks a single connection, what user it is, and its buffered input and output.
cve/
cve/bin        - location of executable programs after compile/link
cve/dest       - build tools; needs updating
cve/src        - project source code
cve/src/client - cve updater/login tool and main client
cve/src/common - code used in both client and server
cve/src/ide    - code for collaborative IDE, part of client
cve/src/model  - code for virtual objects and behavior
cve/src/npc    - code for computer-controlled characters/bots
cve/src/server - the main CVE server code

To see how the client interacts with messages received from the server, look at the CVE src/client directory, especially the dispatcher, an object that takes inputs from multiple sources and sends them as events (method calls) to appropriate objects.
Steed's Slides
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
Steed's Ch. 11, part 2
Consider the ones we didn't get to last time.
Fundamentals and 5+ minute videos:
If you don't want to learn how to make a rig yourself, or want to create a much more advanced rig, use the Rigify addon.

Very short / quick videos (the dude is kind of over the top and annoying, but does give information pretty concisely):
Although a naive generic definition of scalability in multi-user games might be: "handling more users", Steed thinks we can do better than that.
Steed's Ch. 12 Slides
We are just looking for interesting stuff in the slides that I might have skipped, or figures that help illustrate selected topics that I covered.
We did Steed's Ch. 12 Slides 1-15 or so.
\death target - do you have messages to indicate damage and death? What all is needed around those features?
\fire
\damage target n to reduce the health of target by n points.
Move monster to server. Need to separate that out from current client code.
Server sends out positions. Client reports contact, only with its own player:

\hit player monster

Server calculates damage based on monster. Server sends out damage messages in response to client contact:

\damage target n
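The server-side handling of a \damage message can be sketched as follows. This is a hypothetical illustration of parsing the "\damage target n" line and applying it to a health table; the DamageHandler class and its method are assumptions, not CVE code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: parse "\damage target n" and reduce target's health.
public class DamageHandler {
    public final Map<String, Integer> health = new HashMap<>();

    // Handle a protocol line like "\damage monster1 5".
    // Returns the target's remaining health, or -1 if malformed/unknown.
    public int handle(String line) {
        String[] f = line.trim().split("\\s+");
        if (f.length != 3 || !f[0].equals("\\damage")) return -1;
        Integer hp = health.get(f[1]);
        if (hp == null) return -1;
        int remaining = Math.max(0, hp - Integer.parseInt(f[2]));
        health.put(f[1], remaining);
        return remaining;
    }
}
```

Clamping at zero lets the server decide separately whether to emit a \death message when remaining health reaches 0.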
Steed's Slides
Name | Ports |
---|---|
atki7828 | 4500-4501 |
demp5996 | 4502-4503 |
foss6583 | 4504-4505 |
mamm9263 | 4506-4507 |
mitc9543 | 4508-4509 |
jkneill | 4510-4511 |
scha1331 | 4512-4513 |
shi2145 | 4514-4515 |
whit8285 | 4516-4517 |
rwhitfield | 4518-4519 |
Steed's Chapter 13 Slides, but today I am only going to do Slides 1-5, and Friday none of them. More from Chapter 13 next week.
After starting with all rectangular box shapes, CVE was extended with two primitives to allow for uneven terrain: "ramps", ... and "heightfields".
Room {
  name atrium
  x 70.90000000000001
  y 0
  z 23.3
  w 13
  h 4.05
  l 9.199999999999999
  texture dat/nmsu/texsmall/wall2.gif
  floor Wall { texture dat/nmsu/textures/floor.gif }
  ceiling Wall { texture dat/textures/ceiling.gif }
  obstacles [
    Ramp { coords [70.76, 0, 32.5] color pink
           texture dat/nmsu/texsmall/blue_tile.gif
           type 3 width 3.1 height 1 length 13.2 numsteps 5 }
    Ramp { coords [77.2, 0, 29.4] color pink
           texture dat/nmsu/texsmall/blue_tile.gif
           type 3 width 6.2 height 1 length 7.3 numsteps 5 }
    Ramp { coords [74.3, 0, 27.5] color pink
           texture dat/nmsu/texsmall/blue_tile.gif
           type 4 width 2.8 height 1 length 4.17 numsteps 5 }
    Ramp { coords [74.8, 0, 27.3] color pink
           texture dat/nmsu/texsmall/blue_tile.gif
           type 1 width 2 height 1 length 4 numsteps 5 }
  ]
...
Now for some Highlights from Di Giuseppe Chapter 6
Reading
First, photos:
Room { NAME blahblah ...
  obstacles [
    ...
    HeightField {
      coords [26, 12.3, 18.9]
      tex grass.png
      width 2
      length 2
      heights [ [0, 0, 0],
                [0, 0.5, 0],
                [0, 0, 0] ]
    }
  ]
}

Discussion of HeightField.icn:
#
# A HeightField is a non-flat piece of terrain, analogous to a Ramp.
# It has a "world-coordinate embedding", within which it plots a grid
# of varying heights, rendered using a regular mesh of triangles.
# See [Rabin2010], Chapter 4.2, Figure 4.2.11. Compared with the Rabin
# examples, we use a particular "alternating diagonal" layout:
#
#   V-V-V
#   |\|/|
#   V-V-V
#   |/|\|
#   V-V-V
#
# Call the rectangular surface regions between adjacent vertices "cells".
# Let HF be a list of list of heights. There are in fact *HF-1 cell rows,
# and *(HF[1])-1 cell columns. In the above example *HF=3, *(HF[1])=3, and
# although the heightfield matrix is a 3x3, the cell matrix is 2x2. The
# cell row length is length/(*HF-1) and the
# cell column width is width/(*(HF[1])-1).
#
# Vertex Vij, where i is the row and j is the column, starting at 0, is
# given by x+(i*cell_column_width), y+HF[i+1][j+1], z+(j*cell_row_length).
#
class HeightField : Obstacle(
   x, y, z,         # base position
   width, length,   # x- and z- extents
   HF,              # list of lists of y-offsets
   rows, columns, row_length, column_width   # derived/internal
   )

   #
   # assumes 0-based subscripts
   #
   method calc_vertex(i,j)
      return [x+i*column_width, y+HF[i+1][j+1], z+(j*row_length)]
   end

   method render(render_level)
      every row := 2 to *HF do {
         every col := 2 to *(HF[1]) do {
            v1 := calc_vertex(col-1,row-1)
            v2 := calc_vertex(col-1,row)
            v3 := calc_vertex(col,row)
            v4 := calc_vertex(col,row-1)
            # there are two cases, triangle faces forward (cell row+col even)
            # and triangle faces backward (cell row+col odd)
            if row+col % 2 = 0 then {
               # render a triangle facing the previous column
               FillPolygon(v1 ||| v2 ||| v3)
               FillPolygon(v1 ||| v3 ||| v4)
            }
            else {
               # render a triangle facing the next column
               FillPolygon(v1 ||| v2 ||| v4)
               FillPolygon(v2 ||| v3 ||| v4)
            }
         }
      }
   end

initially(coords,w,h,l,tex)
   HF := list(3)
   every !HF := list(3, 0.0)
   HF[2,2] := 0.5
   rows := *HF-1, columns := *(HF[1])-1
   row_length := length/rows, column_width := width/columns
end

And: discussion of procedural generating of a heightfield
procedure main()
   every z := 0 to 5 do {
      every x := 0 to 20 do {
         writes(trun(hf(x,z)), " ")
      }
      write()
   }
end

procedure hf(x:real,z:real)
   return (log(x+1,2) + z/5.0 * (1-x/20)) * 1.5 / 4.39
end

procedure trun(r)
   return left(string(r) ? tab((find(".")+3)|0), 5)
end
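For readers more at home in Java, here is a rough translation of the Unicon hf() sketch above, useful for checking the procedural heightfield formula outside the game. The class name is an assumption; the formula is copied from the Unicon code (log2 rise along x, tilted by z, scaled so the peak is roughly 1.5).

```java
// A rough Java port of the Unicon heightfield generator sketch.
public class HeightFn {
    // Height at grid position (x, z).
    public static double hf(double x, double z) {
        double log2 = Math.log(x + 1) / Math.log(2);   // log base 2 of (x+1)
        return (log2 + z / 5.0 * (1 - x / 20)) * 1.5 / 4.39;
    }

    // Print the same 6-row by 21-column grid as the Unicon main().
    public static void main(String[] args) {
        for (int z = 0; z <= 5; z++) {
            StringBuilder row = new StringBuilder();
            for (int x = 0; x <= 20; x++)
                row.append(String.format("%5.2f ", hf(x, z)));
            System.out.println(row);
        }
    }
}
```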
s (115) + 62 = 177, hex B1
y (121) + 62 = 183, hex B7
s (115) + 62 = 177, hex B1
t (116) + 62 = 178, hex B2
e (101) + 62 = 163, hex A3
m (109) + 62 = 171, hex AB

Because the CVE session transcript was printing out strings using Unicon's "image" function, which prints upper ascii characters in hex format, this string in the transcript looked approximately like "\xb1\x00\xb7\x00\xb1\x00\xb2\x00\xa3\x00\xab\x00" (i.e. it was the "secret" login name used to do a system login and create a new user account).
Implementing cypher(s) in C would be trivial, is it as trivial in Java? (yes, see cypher.java).
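As a sanity check on the arithmetic above, a minimal Java sketch follows. It assumes the cypher simply adds 62 to each character code, as in the worked "system" example; this is a reconstruction from the example, not the actual cypher.java.

```java
// Minimal sketch, assuming the cypher just shifts each character up by 62,
// as in the worked example above ("system" -> B1 B7 B1 B2 A3 AB).
public class Cypher {
    public static String cypher(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray())
            sb.append((char) (c + 62));   // e.g. 's' (115) -> 177 = 0xB1
        return sb.toString();
    }
}
```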
cve/doc - project documentation source; several subdirectories
cve/dat
cve/dat/3dmodels/ - basic support for models in .s3d and .x format
cve/dat/help      - command reference and user guide PDF
cve/dat/images    - non-textures such as logos
cve/dat/images/letters - texture images for alphanumeric in-game text
cve/dat/newsfeed  - in-game forum-like asynchronous text boards
cve/dat/nmsu      - NMSU Science Hall level; many subdirs
cve/dat/projects/ - in-game software project spaces
cve/dat/scratch   - scratch space
cve/dat/sessions  - in-game collaborative IDE sessions
cve/dat/textures  - common textures that may be used in all levels
cve/dat/uidaho/   - UIdaho Janssen Engineering Building
cve/dat/users     - user accounts not tied to a particular server/level
#@ Avatar property file generated by amaker.icn
#@ on: 07:17:49 MST 2018/03/22
NAME=spock
GENDER=m
HEIGHT=0.6
XSIZE=0.7
YSIZE=0.7
ZSIZE=0.7
SKIN COLOR=white
SHIRT COLOR=white
PANTS COLOR=white
SHOES COLOR=white
HEAD SHAPE=1
SHAPE=human
FACE PICTURE=spock.gif
Privacy=Everyone
There is one other kind of raw data I'd like you to collect: yourselves. I want to push beyond the crude avatars I've used previously, and model ourselves in crude, low-polygon textured glory.
To anyone who gets confused about a positive axis running from right to left
or from top to bottom instead of what you were expecting: this just means
that your perspective is turned around from those used by the world
coordinates. If your character rotates 180 degrees appropriately, suddenly
positive values go the opposite direction from before and what was right to
left is the more familiar left to right. The point: world coordinates are
different from your personal eyeball coordinate system, don't confuse them.
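The point above can be checked numerically: rotating your heading 180 degrees about the vertical (Y) axis negates the X and Z components of any direction, so what was "positive x, to your right" becomes negative. This is plain rotation math with no game library involved; the class name is illustrative.

```java
// Rotating a direction (x, z) about the Y axis; at 180 degrees both
// components flip sign, which is exactly the "turned around" effect.
public class YawFlip {
    public static double[] rotateY(double x, double z, double degrees) {
        double r = Math.toRadians(degrees);
        return new double[] {
            x * Math.cos(r) + z * Math.sin(r),
           -x * Math.sin(r) + z * Math.cos(r)
        };
    }
}
```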
For your room, we need to:
Assigning a value to a variable uses := in Unicon. Most of the rest of
the numeric operators and computation is the same as in any programming
language. In 3D graphics, a lot of real numbers are used.
In the jeb1 demo, the
user controls a moving camera, which has an (x,y,z) location, an
(x,y,z) vector describing where they are looking, relative to their
current position, and camera angles up or down. Initial values of
posx and posz are "in the middle of the first room in the model".
The rooms have min and max values for x, y, and z, so:
Unicon has a "list" data type for storing an ordered collection of items.
There is a global variable named Room which holds such a list, a list of
Room() objects. We will discuss Room() objects in a bit. This is how
the list of rooms is created, with 0 elements:
The code that actually reads a model file and creates Room() objects
and puts them on the Rooms list is procedure make_model(). We defer
the actual parsing discussion to later or elsewhere.
The following line steps through all the elements of the Rooms list,
and tells each Room() to draw itself. The exclamation point is an
operator that generates each element from the list if the surrounding
expression requires it. The "every" control structure requires every
result the expression can produce.
Answer: in HW4 such a thing might be omitted, but you might get around
to including the column as an obstacle (a virtual Box) in your .dat file.
The obstacles section is also where things like bookshelves and tables
might go.
The parser is
handwritten in a vaguely recursive-descent style; at syntax levels where
many fields can be read, it builds a table (so order does not matter)
from which we populate an object's fields. So the table t's fields correspond
to what the file had in it, and the .coords here is the list of vertices
which the object in the model wants.
Events may be strings (for keyboard characters), but
most are small negative integer codes, with symbolic names such as
Key_Up defined in keysyms.icn.
Wall() is the simplest class here: it is just a textured polygon,
holding a texture value and a list of x,y,z coordinates, and
providing a method render(). Every object in the CVE's model
will provide a method render().
Class Box() is more interesting: it is a rectangular area with walls
that one cannot walk through, and a bounding box for collision detection.
Doors, openings, and other exceptions are special-cased by subclassing
and overriding default behavior. Rectangular areas are singled out
because they are common and have easy collision detection; when a wall
goes from floor to ceiling, collision detection reduces to a 2D
problem.
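The 2D reduction mentioned above can be sketched in a few lines: when walls run from floor to ceiling, testing whether a character at (x, z) has walked into a Box only needs the box's X/Z footprint. The class and method names are illustrative assumptions, not the CVE code.

```java
// Sketch of floor-to-ceiling collision reduced to 2D: is the point
// (x, z) inside the box's footprint [minx,maxx] x [minz,maxz]?
public class Collide2D {
    public static boolean inBox(double x, double z,
                                double minx, double maxx,
                                double minz, double maxz) {
        return x >= minx && x <= maxx && z >= minz && z <= maxz;
    }
}
```

Using the column from the SH 167 example (x in 34.0..34.3, z in 0.2..0.6), a character at (34.1, 0.4) collides while one at (33.0, 0.4) does not.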
Box() has methods:
Class Door() is not just a graphical object; it is a connection between two
rooms, which can be open (1.0) or closed (0.0) or in between. It supports
methods:
model.icn
The only part we had time to look at today was the top of class Room:
A Common Coordinate System
For this class, we will use a standard/common coordinate system: 1.0 units =
1 meter, with an origin (0.0,0.0,0.0) in the northwest corner at ground
level. Y grows "up", X grows east, and Z grows south. The entire building
(except anything below ground level as viewed from the northwest corner)
will have fairly small positive real numbers in the model. This coordinate
system is referred to as FHN world coordinates (FHN=Frank Harary
Normal). Frank Harary was a graph theorist friend I knew in New Mexico.
The coordinate system is named after him because (0,0,0) was at one time
the corner of his office.
Room Modeling
For simplicity's sake, a room will consist of one or more rectangular
areas, each bounded by floor, ceiling, walls, and doors or openings into
other rectangular areas. Fortunately or unfortunately for you, we will
use the term Room to denote these rectangular areas. Within each room
are 0 or more obstacles and decorations. Obstacles are things like tables
and chairs, computers and printers. Decorations are things like signs
and posters that do not affect movement.
measure its x and z using FHN,
Another Sample Room
Taken from NMSU's virtual CS department we have the following. It is for
a ground floor room (y's are 0). This example has both obstacles and
decorations.
Room {
name SH 167
x 29.2
y 0
z 0.2
w 6
h 3.05
l 3.7
floor Wall {
texture floor2.gif
coords [29.2,0,0.2, 29.2,0,3.9, 35.2,0,3.9, 35.2,0,0.2]
}
obstacles [
Box { # column
Wall {coords [34.3,0,0.2, 34.3,3.05,0.2, 34.3,3.05,0.6, 34.3,0,0.6]}
Wall {coords [34.3,0,0.6, 34.0,0,0.6, 34.0,3.05,0.6, 34.3,3.05,0.6]}
Wall {coords [34.0,0,0.6, 34.0,3.05,0.6, 34.0,3.05,0.2, 34.0,0,0.2]}
}
Box { # window sill
Wall {coords [29.2,0,0.22, 29.2,1.0,0.22, 35.2,1.0,0.22, 35.2,0,0.22]}
Wall {coords [29.2,1,0.22, 29.2,1.0,0.2, 35.2,1.0,0.2, 35.2,1,0.22]}
}
Chair {
coords [31.2,0,1.4]
position 0
color red
type office
movable true
}
Table {
coords [31.4,0,2.4]
position 180
color very dark brown
type office
}
]
decorations [
Wall { # please window
texture wall2.gif
coords [29.2,1.0,0.22, 29.2,3.2,0.22, 35.2,3.2,0.22, 35.2,1.0,0.22]
}
Wall { # whiteboard
texture whiteboard.gif
coords [29.3,1.0,3.7, 29.3,2.5,3.7, 29.3,2.5,0.4, 29.3,1.0,0.4]
}
Windowblinds {
coords [29.2,1.5,0.6]
angle 90
crod blue
cblinds very dark purplish brown
height 3.05
width 6
}
]
}
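For readers who think better in code, here is a minimal sketch (in Python, not the course's Unicon, and with hypothetical class names) of the data a Room record like SH 167 carries: a corner position, extents, a floor, and obstacle/decoration lists.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Wall:
    texture: str = ""
    coords: List[float] = field(default_factory=list)  # x,y,z triples

@dataclass
class Room:
    name: str
    x: float      # northwest-bottom corner (FHN world coordinates)
    y: float
    z: float
    w: float      # width  (x extent)
    h: float      # height (y extent)
    l: float      # length (z extent)
    floor: Optional[Wall] = None
    obstacles: list = field(default_factory=list)
    decorations: list = field(default_factory=list)

    def bounds(self):
        """Bounding box as (minx, miny, minz, maxx, maxy, maxz)."""
        return (self.x, self.y, self.z,
                self.x + self.w, self.y + self.h, self.z + self.l)

# the SH 167 record from the sample above
sh167 = Room("SH 167", 29.2, 0, 0.2, 6, 3.05, 3.7)
```

Note how the bounding box of SH 167 works out to the same corner values (35.2, 3.9) that appear in its floor and window coords.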
Diving into Jeb1
File Organization Comes First.
It appears we have two source files, and a few images (.gif) and model
(.dat) files.
jeb1: jeb1.icn model.u
unicon jeb1 model.u
model.u: model.icn
unicon -c model
jeb1.zip: jeb1.icn model.icn
zip jeb1.zip jeb1.icn model.icn *.gif *.dat makefile README
jeb1.icn
A Walk Through a Smattering of Jeb1
Most attributes can be changed afterwards using function WAttrib(),
which takes as many attributes as you like. The following line
enables texture mapping in the window:
WAttrib("texmode=on")
posx := (r.minx + r.maxx) / 2
posy := r.miny + 1.9
posz := (r.minz + r.maxz) / 2
lookx := posx; looky := posy -0.15; lookz := 0.0
Rooms := [ ]
procedure make_model(corridor)
local fin, s, r
fin := open(modelfile) | stop("can't open model.dat")
while s := readlin(fin) do s ? {
if ="#" then next
else if ="Room" then {
r := parseroom(s, fin)
put(world.Rooms, r)
world.RoomsTable[r.name] := r
if /posx then {
# ... no posx defined, calculate posx/posy per earlier code
}
}
else if ="Door" then parsedoor(s,fin)
else if ="Opening" then parseopening(s,fin)
# else: we didn't know what to do with it; maybe it's an error!
}
close(fin)
end
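The eye-placement step above (done the first time a room is parsed) centers the camera in the room's bounding box at standing eye height. It can be sketched as a standalone function; Python is used for illustration, since the real code sets Unicon globals:

```python
def initial_eye(minx, miny, minz, maxx, maxz):
    # center of the room's floor plan, at roughly standing eye height
    posx = (minx + maxx) / 2.0
    posy = miny + 1.9
    posz = (minz + maxz) / 2.0
    # look point: same x, slightly below eye level, toward z = 0
    lookx, looky, lookz = posx, posy - 0.15, 0.0
    return (posx, posy, posz), (lookx, looky, lookz)

# using the SH 167 bounding box from the sample data
pos, look = initial_eye(29.2, 0, 0.2, 35.2, 3.9)
```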
The "rooms" in jeb.dat are JEB 230, JEB 228, and the corridor
immediately outside. Each room is created by a constructor procedure,
and inserted into both a list and a table for convenient access. (Do
we need the list? Maybe not! Tables are civilization!)
.render() calls a method render() on an object (in this case, on each
object in turn as it is produced by !). Note that our CVE will probably
want to get smart about only drawing those rooms that are "visible",
in order to scale performance.
every (!Rooms).render()
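As one illustration of "getting smart" about which rooms to draw, a crude distance-based filter might look like the following Python sketch (hypothetical data layout; a real engine would use frustum or portal culling):

```python
def rooms_to_render(rooms, camx, camz, radius=20.0):
    # rooms: list of (name, minx, minz, maxx, maxz) floor footprints
    visible = []
    for name, minx, minz, maxx, maxz in rooms:
        # clamp the camera position into the room's box to find the
        # nearest point of the room, then test against the radius
        nx = min(max(camx, minx), maxx)
        nz = min(max(camz, minz), maxz)
        if (nx - camx) ** 2 + (nz - camz) ** 2 <= radius ** 2:
            visible.append(name)
    return visible
```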
Aspects of the .dat file format
Obstacles
Student question for the day: my room is not a perfect rectangle, it has
an extra column jutting out on one of the walls, what do I do?
obstacles [
Box { # column
Wall {coords [34.3,0,0.2, 34.3,3.05,0.2, 34.3,3.05,0.6, 34.3,0,0.6]}
Wall {coords [34.3,0,0.6, 34.0,0,0.6, 34.0,3.05,0.6, 34.3,3.05,0.6]}
Wall {coords [34.0,0,0.6, 34.0,3.05,0.6, 34.0,3.05,0.2, 34.0,0,0.2]}
}
]
Ceilings?
Originally there was no special ceiling syntax; a person had to put
a decoration up there in order to change the ceiling.
However, ceilings are very much like floors, so I went into the model.icn
procedure parseroom() and added the following after the floor code.
Hopefully it is there now in the public version.
if \ (t["ceiling"]) then {
t["ceiling"].coords := [t["x"],t["y"]+t["h"],t["z"],
t["x"],t["y"]+t["h"],t["z"]+t["l"],
t["x"]+t["w"],t["y"]+t["h"],t["z"]+t["l"],
t["x"]+t["w"],t["y"]+t["h"],t["z"]]
t["ceiling"].set_plane()
r.ceiling := t["ceiling"]
}
This allows us to say declarations like this in .dat files:
ceiling Wall {
texture jeb230calendar.gif
}
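The ceiling-quad arithmetic from parseroom() can be checked in isolation; this Python sketch mirrors the coordinate expressions above: four corners at height y+h spanning the room's footprint.

```python
def ceiling_coords(x, y, z, w, h, l):
    # four corners at height y+h, spanning the room's footprint,
    # in the same order as the Unicon code above
    top = y + h
    return [x,     top, z,
            x,     top, z + l,
            x + w, top, z + l,
            x + w, top, z]
```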
Event Handling
Most programs that open a window or use a graphical user interface
are event-driven, meaning that the program's main job is to
sit around listening for user keystrokes and mouse clicks, interpreting
them as instructions, and carrying them out. Pending() returns a list of
events waiting to be processed. Event() actually returns the next key or
mouse event. For a simple demo program, one could code the event
processing loop oneself, something like the following.
repeat {
   if *Pending() = 0 then {
      # continue in current direction, if any
   }
else {
case ev := Event() of {
Key_Up: cam_move(xdelta := 0.05) # Move Forward
... other keys
}
}
$include "keysyms.icn"
The Jeb1 demo isn't this simple, since it embeds the 3D window in a Unicon
GUI interface. Events will be discussed in more detail below; for now it is
enough to say that they just modify the camera location and tell the scene
to redraw itself. cam_move() checks for a collision and, if there is none,
updates the global variables (e.g. posx, posy, posz). After cam_move(), the
function Eye(x,y,z,lx,ly,lz) sets the camera position and look direction.
jeb1 is a Unicon GUI application. The GUI owns the control flow and calls
a procedure when an interesting event happens. In Unicon terminology, a
Dispatcher runs the following loop until the program exits. The key call
is select(), which tells us which input sources have events for us.
method message_loop(r)
local L, dialogwins, x
connections := []
dialogwins := set()
every insert(dialogwins, (!dialogs).win)
every put(connections, !dialogwins | !subwins | !nets)
while \r.is_open do {
if x := select(connections,1)[1] then {
if member(subwins, x) then {
&window := x
do_cve_event()
}
else if member(dialogwins, x) then do_event()
else if member(nets, x) then do_net(x)
else write("unknown selector ", image(x))
# do at least one step per select() for smoother animation
do_nullstep()
}
else do_validate() | do_ticker() | do_nullstep() | delay(idle_sleep)
}
end
do_event() calls the normal Unicon GUI callbacks for the menus, textboxes,
etc. do_cve_event() is a GUI handler for keys in the 3D subwindow.
method do_cve_event()
local ev, dor, dist, closest_door, closest_dist, L := Pending()
case ev := Event() of {
Key_Up: {
xdelta := 0.05
while L[1]===Key_Up_Release & L[4]===Key_Up do {
Event(); Event(); xdelta +:= 0.05
}
cam_move(xdelta) # Move Forward
}
Key_Down: {
xdelta := -0.05
while L[1]===Key_Down_Release & L[4]===Key_Down do {
Event(); Event(); xdelta -:= 0.05
}
cam_move(xdelta) # Move Backward
}
Key_Left: {
ydelta := -0.05
while L[1]===Key_Left_Release & L[4]===Key_Left do {
Event(); Event(); ydelta -:= 0.05
}
cam_orient_yaxis(ydelta) # Turn Left
}
Key_Right: {
ydelta := 0.05
while L[1]=== Key_Right_Release & L[4] === Key_Right do {
Event(); Event(); ydelta +:= 0.05
}
cam_orient_yaxis(ydelta) # Turn Right
}
"w": looky +:= (lookdelta := 0.05) #Look Up
"s": looky +:= (lookdelta := -0.05) #Look Down
"q": exit(0)
"d": {
closest_door := &null
closest_dist := &null
every (dor := !(world.curr_room.exits)) do {
if not find("Door", type(dor)) then next
dist := sqrt((posx-dor.x)^2+(posz-dor.z)^2)
if /closest_door | (dist < closest_dist) then {
closest_door := dor; closest_dist := dist
}
}
if \closest_door then {
if \ (closest_door.delt) === 0 then {
closest_door.start_opening()
}
else closest_door.done_opening()
closest_door.delta()
}
}
-166 | -168 | (-(Key_Up|Key_Down) - 128) : xdelta := 0
-165 | -167 | (-(Key_Left|Key_Right) - 128) : ydelta := 0
-215 | -211 : lookdelta := 0
}
Eye(posx,posy,posz,lookx,looky,lookz)
end
The Line Between jeb1.icn and model.icn
This program is a rapid prototype to test a concept. Originally it
was a single file (a single procedure!), but after the initial
demo (by Korrey Jacobs, adapted by Ray Lara, both at NMSU)
proved the concept, Dr. J started reorganizing it into two categories:
the code providing the underlying modeling capabilities (model.icn)
and the code providing the user interface (jeb1.icn). The
dividing line is imperfect; we might want to move some code from
one file into the other.
model.icn
We may as well start with the larger of the two source files.
model.icn is intended to be usable for any CVE, not just the UI
CS department CVE. It defines classes Door, Wall, Box, and Room,
where Room is a subclass of Box.
class Wall(texture, coords)
method render()
if current_texture ~=== texture then {
WAttrib("texture="||texture, "texcoord=0,0,0,1,1,1,1,0")
current_texture := texture
}
(FillPolygon ! coords) | write("FillPolygon fails")
end
initially(t, c[])
texture := t
coords := c
end
The full code of jeb1.icn
import gui
$include "guih.icn"
class Untitled : Dialog(chat_input, chat_output, text_field_1, subwin)
method component_setup()
self.setup()
end
method end_dialog()
end
method init_dialog()
end
method on_exit(ev)
write("goodbye")
exit(0)
end
method on_br(ev)
end
method on_kp(ev)
end
method on_mr(ev)
end
method on_subwin(ev)
write("subwin")
end
method on_about(ev)
local sav
sav := &window
&window := &null
Notice("jeb1 - a 3D demo by Jeffery")
&window := sav
end
method on_chat(ev)
chat_output.set_contents(put(chat_output.get_contents(), chat_input.get_contents()))
chat_output.set_selections([*(chat_output.get_contents())])
chat_input.set_contents("")
end
method setup()
local exit_menu_item, image_1, menu_1, menu_2, menu_bar_1, overlay_item_1, overlay_set_1, text_menu_item_2
self.set_attribs("size=800,750", "bg=light gray", "label=jeb1 demo")
menu_bar_1 := MenuBar()
menu_bar_1.set_pos("0", "0")
menu_bar_1.set_attribs("bg=very light green", "font=serif,bold,16")
menu_1 := Menu()
menu_1.set_label("File")
exit_menu_item := TextMenuItem()
exit_menu_item.set_label("Exit")
exit_menu_item.connect(self, "on_exit", ACTION_EVENT)
menu_1.add(exit_menu_item)
menu_bar_1.add(menu_1)
menu_2 := Menu()
menu_2.set_label("Help")
text_menu_item_2 := TextMenuItem()
text_menu_item_2.set_label("About")
text_menu_item_2.connect(self, "on_about", ACTION_EVENT)
menu_2.add(text_menu_item_2)
menu_bar_1.add(menu_2)
self.add(menu_bar_1)
overlay_set_1 := OverlaySet()
overlay_set_1.set_pos(6, 192)
overlay_set_1.set_size(780, 558)
overlay_item_1 := OverlayItem()
overlay_set_1.add(overlay_item_1)
overlay_set_1.set_which_one(overlay_item_1)
self.add(overlay_set_1)
subwin := Subwindow3D()
subwin.set_pos(14, 195)
subwin.set_size("767", "551")
subwin.connect(self, "on_subwin", ACTION_EVENT)
subwin.connect(self, "on_br", BUTTON_RELEASE_EVENT)
subwin.connect(self, "on_mr", MOUSE_RELEASE_EVENT)
subwin.connect(self, "on_kp", KEY_PRESS_EVENT)
self.add(subwin)
chat_input := TextField()
chat_input.set_pos("12", "162")
chat_input.set_size("769", "25")
chat_input.set_draw_border()
chat_input.set_attribs("bg=very light green")
chat_input.connect(self, "on_chat", ACTION_EVENT)
chat_input.set_contents("")
self.add(chat_input)
chat_output := TextList()
chat_output.set_pos("10", "29")
chat_output.set_size("669", "127")
chat_output.set_draw_border()
chat_output.set_attribs("bg=very pale whitish yellow")
chat_output.set_contents([""])
self.add(chat_output)
image_1 := Image()
image_1.set_pos("686", "31")
image_1.set_size("106", "120")
image_1.set_filename("nmsulogo.gif")
image_1.set_internal_alignment("c", "c")
image_1.set_scale_up()
self.add(image_1)
end
initially
self.Dialog.initially()
end
#
# N3Dispatcher is a custom dispatcher. Currently it knows about 3D
# subwindows but we will extend it for networked 3D applications.
#
class N3Dispatcher : Dispatcher(subwins, nets, connections)
method add_subwin(sw)
insert(subwins, sw)
end
method do_net(x)
write("do net ", image(x))
end
method do_nullstep()
local moved, dor
thistimeofday := gettimeofday()
thistimeofday := thistimeofday.sec * 1000 + thistimeofday.usec / 1000
if (delta := thistimeofday - \lasttimeofday) < 17 then {
delay(17 - delta)
}
lasttimeofday := thistimeofday
if xdelta ~= 0 then {
cam_move(xdelta)
moved := 1
}
if ydelta ~= 0 then {
cam_orient_yaxis(ydelta)
moved := 1
}
if lookdelta ~= 0 then {
looky +:= lookdelta; moved := 1
}
every (\((dor := !(world.curr_room.exits)).delt)) ~=== 0 do {
if dor.delta() then moved := 1
else dor.done_opening()
}
if \moved then {
Eye(posx,posy,posz,lookx,looky,lookz)
return
}
end
method cam_move(dir)
local deltax := dir * cam_lx, deltaz := dir * cam_lz
if world.curr_room.disallows(posx+deltax,posz+deltaz) then {
deltax := 0
if world.curr_room.disallows(posx+deltax,posz+deltaz) then {
deltaz := 0; deltax := dir*cam_lx
if world.curr_room.disallows(posx+deltax,posz+deltaz) then {
fail
}
}
}
#calculate new position
posx +:= deltax
posz +:= deltaz
#update look at spot
lookx := posx + cam_lx
lookz := posz + cam_lz
end
#
# Orient the camera
#
method cam_orient_yaxis(turn)
#update camera angle
cam_angle +:= turn
if abs(cam_angle) > 2 * &pi then
cam_angle := 0.0
cam_lx := sin(cam_angle)
cam_lz := -cos(cam_angle)
lookx := posx + cam_lx
lookz := posz + cam_lz
end
global lasttimeofday
#
# Execute one event worth of motion and update the camera
#
method do_cve_event()
local ev, dor, dist, closest_door, closest_dist, L := Pending()
case ev := Event() of {
Key_Up: {
xdelta := 0.05
while L[1]===Key_Up_Release & L[4]===Key_Up do {
Event(); Event(); xdelta +:= 0.05
}
cam_move(xdelta) # Move Forward
}
Key_Down: {
xdelta := -0.05
while L[1]===Key_Down_Release & L[4]===Key_Down do {
Event(); Event(); xdelta -:= 0.05
}
cam_move(xdelta) # Move Backward
}
Key_Left: {
ydelta := -0.05
while L[1]===Key_Left_Release & L[4]===Key_Left do {
Event(); Event(); ydelta -:= 0.05
}
cam_orient_yaxis(ydelta) # Turn Left
}
Key_Right: {
ydelta := 0.05
while L[1]=== Key_Right_Release & L[4] === Key_Right do {
Event(); Event(); ydelta +:= 0.05
}
cam_orient_yaxis(ydelta) # Turn Right
}
Key_PgUp |
"w": looky +:= (lookdelta := 0.05) #Look Up
Key_PgDn |
"s": looky +:= (lookdelta := -0.05) #Look Down
"q": exit(0)
"d": {
closest_door := &null
closest_dist := &null
every (dor := !(world.curr_room.exits)) do {
if not find("Door", type(dor)) then next
dist := sqrt((posx-dor.x)^2+(posz-dor.z)^2)
if /closest_door | (dist < closest_dist) then {
closest_door := dor; closest_dist := dist
}
}
if \closest_door then {
if \ (closest_door.delt) === 0 then {
closest_door.start_opening()
}
else closest_door.done_opening()
closest_door.delta()
}
}
-166 | -168 | (-(Key_Up|Key_Down) - 128) : xdelta := 0
-165 | -167 | (-(Key_Left|Key_Right) - 128) : ydelta := 0
-215 | -211 | (-(Key_PgUp|Key_PgDn)-128): lookdelta := 0
}
Eye(posx,posy,posz,lookx,looky,lookz)
end
method message_loop(r)
local L, dialogwins, x
connections := []
dialogwins := set()
every insert(dialogwins, (!dialogs).win)
every put(connections, !dialogwins | !subwins | !nets)
while \r.is_open do {
if x := select(connections,1)[1] then {
if member(subwins, x) then {
&window := x
do_cve_event()
}
else if member(dialogwins, x) then do_event()
else if member(nets, x) then do_net(x)
else write("unknown selector ", image(x))
# do at least one step per select() for smoother animation
do_nullstep()
}
else do_validate() | do_ticker() | do_nullstep() | delay(idle_sleep)
}
end
initially
subwins := set()
nets := set()
dialogs := set()
tickers := set()
idle_sleep_min := 10
idle_sleep_max := 50
compute_idle_sleep()
end
class Subwindow3D : Component ()
method resize()
compute_absolutes()
# WAttrib(cwin, "size="||w||","||h)
end
method display()
initial please(cwin)
Refresh(cwin)
end
method init()
if /self.parent then
fatal("incorrect ancestry (parent null)")
self.parent_dialog := self.parent.get_parent_dialog_reference()
self.cwin := (Clone ! ([self.parent.get_cwin_reference(), "gl",
"size="||w_spec||","||h_spec,
"pos=14,195", "inputmask=mck"] |||
self.attribs)) | stop("can't open 3D win")
self.cbwin := (Clone ! ([self.parent.get_cbwin_reference(), "gl",
"size="||w_spec||","||h_spec,
"pos=14,195"] |||
self.attribs))
set_accepts_focus()
dispatcher.add_subwin(self.cwin)
end
end
# link "world"
global modelfile
procedure main(argv)
local d
modelfile := argv[1] | stop("usage: jeb1 modelfile")
world := FakeWorld()
#
# overwrite the system dispatcher with one that knows about subwindows
#
gui::dispatcher := N3Dispatcher()
d := Untitled()
d.show_modal()
end
link model
global world
procedure make_model(corridor)
local fin, s, r
fin := open(modelfile) | stop("can't open ", image(modelfile))
while s := readlin(fin) do s ? {
if ="#" then next
else if ="Room" then {
r := parseroom(s, fin)
put(world.Rooms, r)
world.RoomsTable[r.name] := r
if /posx then {
r.calc_boundbox()
posx := (r.minx + r.maxx) / 2
posy := r.miny + 1.9
posz := (r.minz + r.maxz) / 2
lookx := posx; looky := posy -0.15; lookz := 0.0
}
}
else if ="Door" then parsedoor(s,fin)
else if ="Opening" then parseopening(s,fin)
# else write("didn't know what to do with ", image(s))
}
close(fin)
end
#CONTROLS:
#up arrow - move forward
#down arrow - move backward
#left arrow - rotate camera left
#right arrow - rotate camera right
# ' w ' key - look up
# ' s ' key - look down
# ' d ' key - toggle door open/closed
#if you get lost in space (may happen once in a while)
#just restart the program
$include "keysyms.icn"
#GLOBAL variables
global posx, posy, posz # current eye x,y,z position
global lookx, looky, lookz # current look x position and so on
global cam_lx, cam_lz, cam_angle # eye angles for orientation
global xdelta, ydelta, lookdelta
global Rooms
procedure please(d)
local r
&window := d
WAttrib("texmode=on")
#initialize globals
# posx := 32.0; posy := 1.9; posz := 2.0
# lookx := 32.0; lookz := 0.0; looky := 1.75
cam_lx := cam_angle := 0.0; cam_lz := -1.0
# render graphics
make_model()
every r := !world.Rooms do {
if not r.disallows(posx, posz) then
world.curr_room := r
}
every (!world.Rooms).render(world)
xdelta := ydelta := lookdelta := 0
dispatcher.cam_move(0.01)
Eye(posx,posy,posz,lookx,looky,lookz)
# ready for event processing loop
end
# fakeworld - minimal nsh-world.icn substitute for demo
record fakeconnection(connID)
class FakeWorld(
current_texture, d_wall_tex, connection, curr_room,
d_ceil_tex, d_floor_tex, collide, Rooms, RoomsTable
)
method find_texture(s)
return s
end
initially
Rooms := []
RoomsTable := table()
collide := 0.8
connection := fakeconnection()
d_floor_tex := "floor.gif"
d_wall_tex := "walltest.gif"
d_ceil_tex := d_wall_tex
end
### Ivib-v2 layout ##
#...blah blah machine-generated comments omitted...
model.icn
Class Room()
Class Room() is the most important, and is presented in its entirety.
From Box we inherit the vertices that bound our rectangular space.
class Room : Box(floor, # "wall" under our feet
ceiling, # "wall" over our heads
obstacles, # list: things that stop movement
decorations, # list: things to look at
exits, # x-y ways to leave room
name
)
A room disallows a move if (a) the position is outside the room, or
(b) something is in the way.
The margin of 1.2 meters in the code below reduces graphical oddities
that occur if the eye gets too near what it is looking at. Note that JEB
doors are kind of narrow, and that OpenGL's graphical clipping makes it
relatively easy to accidentally see through walls.
method disallows(x,z)
if /minx then calc_boundbox()
# regular area is normally OK
if minx+1.2 <= x <= maxx-1.2 & minz+1.2 <= z <= maxz-1.2 then {
every o := !obstacles do
if o.disallows(x,z) then return
fail
}
# outside of regular area OK if an exit allows it
every e := !exits do {
if e.allows(x,z) then {
if minx <= x <= maxx & minz <= z <= maxz then {
# allow but don't change room yet
}
else {
curr_room := e.other(self) # we moved to the other room
}
fail
}
}
return
end
Method render() draws an entire room.
method render()
every ex := !exits do ex.render()
WAttrib("texmode=on")
floor.render()
ceiling.render()
every (!walls).render()
every (!obstacles).render()
every (!decorations).render()
end
The following add_door method tears a hole in a wall. It needs extending to
handle multiple doors in the same wall, and to handle xplane walls. These
and many other features may actually be in model.icn; the code example in
class is a simplified summary.
method add_door(d)
put(exits, d)
d.add_room(self)
# figure out what wall this door is in, and tear a hole in it:
# find the wall the door is in, remove that wall,
# and replace it with three segments
every w := !walls do {
c := w.coords
if c[1]=c[4]=c[7]=c[10] then {
if d.x = c[1] then write("door is in xplane wall ", image(w))
}
else if c[3]=c[6]=c[9]=c[12] then {
if abs(d.z - c[3]) < 0.08 then { # door is in a zplane wall
# remove this wall
while walls[1] ~=== w do put(walls,pop(walls))
pop(walls)
# replace it with three segments:
# w = above, w2 = left, and w3 = right of door
w2 := Wall ! ([w.texture] ||| w.coords)
w3 := Wall ! ([w.texture] ||| w.coords)
every i := 1 to *w.coords by 3 do {
w.coords[i+1] <:= d.y+d.height
w2.coords[i+1] >:= d.y+d.height
w2.coords[i] >:= d.x
w3.coords[i+1] >:= d.y+d.height
w3.coords[i] <:= d.x + d.width
}
put(walls, w, w2, w3)
return
}
}
else { write("no plane; giving up"); fail }
}
end
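The wall-splitting geometry in add_door() is easier to see in 2D. This Python sketch (hypothetical dict layout, not the model.icn classes) computes the three replacement rectangles in the plane of the wall: the strip above the door and the segments to its left and right.

```python
def split_wall(wall, door):
    """wall/door: dicts with x, y, w(idth), h(eight) in the wall plane.
    Returns [above, left, right] replacement rectangles."""
    door_top = door["y"] + door["h"]
    above = {"x": wall["x"], "y": door_top,
             "w": wall["w"], "h": wall["y"] + wall["h"] - door_top}
    left  = {"x": wall["x"], "y": wall["y"],
             "w": door["x"] - wall["x"], "h": door_top - wall["y"]}
    right = {"x": door["x"] + door["w"], "y": wall["y"],
             "w": wall["x"] + wall["w"] - (door["x"] + door["w"]),
             "h": door_top - wall["y"]}
    return [above, left, right]
```

The Unicon version achieves the same clamping with the <:= and >:= augmented-assignment operators on the copied wall coordinates.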
Rooms maintain separate lists for obstacles and decorations.
Obstacles figure in collision detection.
method add_obstacle(o)
put(obstacles, o)
end
method add_decoration(d)
put(decorations, d)
end
Monster State
You may have worked some of this out already, or be working on it;
I just want to push forward.
Extending CVE Network Protocol
It may be painful for the server to perform certain computations
currently performed by the client, such as determining, for each shot,
what it hit.
| From client to server (and on to other clients) | description | From server to client | description |
|---|---|---|---|
| \fire dx dy dz target | sent on every shot; the coordinates are a direction vector, and target is the entity hit according to the client | \damage target amount | the server's assessment of damage, in response to \fire |
| \weapon userid X | change to weapon X, where X is one of: spear, pistol, shotgun | \death target | notification to de-rez someone's avatar, short of them actually logging out (which would disconnect them) |
| | | \avatar raptor11 ... \avatar akyl7 | notification to rez someone's avatar; our respawn command |
| | | \inform ... | apparently just posts a message to clients' chat boxes; occurs during regular login and might need to occur on respawn |
\fire userid dx dy dz null 0
\fire userid dx dy dz raptor7 15
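A sketch of composing and parsing such \fire messages (Python for illustration; the field layout is inferred from the examples above and may differ from the actual protocol):

```python
def make_fire(userid, dx, dy, dz, target=None, amount=0):
    # "null" marks a miss, matching the first example above
    return "\\fire %s %g %g %g %s %d" % (
        userid, dx, dy, dz, target if target else "null", amount)

def parse_fire(msg):
    parts = msg.split()
    assert parts[0] == "\\fire"
    userid = parts[1]
    direction = tuple(float(v) for v in parts[2:5])
    target = None if parts[5] == "null" else parts[5]
    return userid, direction, target, int(parts[6])
```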
Another option is to aim for higher-level ground, such as filling in some of the gaps between the ~2.5K LOC jeb demo and the FPS genre. The main differences between wandering around the halls of a CS department (the JEB demo) and Wolfenstein 3D or Doom are:
You really need to see some sample S3D files in order to get a feel for the beauty of the S3D file format. s3dparse.icn, and S3D files themselves, may have various usually-nonfatal "bugs". There was even a bug in the S3D file format document.
We also took a look at the desperate situation vis-à-vis creating 3D models (a job performed by experts with much training) and our need to build such models for our games and virtual environments. Let us continue from there.
Dr. J is 5'9" (1.75m), his elbow-elbow width is approximately 24" (0.61m) and his front-back is 11" (.28m) at the belly.
(*this is a trick question)
drawavatar( ? (world.Rooms) )
This invokes some hardwired code to render an avatar in a randomly selected room. The procedure to render the digital photos as is, prior to any 3D modeling, might look like:
procedure drawavatar(r)
   # place randomly in room r
   myx := r.minx + ?(r.maxx - r.minx)
   myy := r.miny
   myz := r.minz + ?(r.maxz - r.minz)
   # ensure a meter of room to work with
   myx <:= r.minx + 1.0; myx >:= r.maxx - 1.0
   myz <:= r.minz + 1.0; myz >:= r.maxz - 1.0
   PushMatrix()
   Translate(myx, myy, myz)
   WAttrib("texmode=on", "texcoord=0,0,0,1,1,1,1,0")
   Texture("jeffery-front.gif")
   FillPolygon(0,0,0, 0,1.75,0, .61,1.75,0, .61,0,0)
   Texture("jeffery-rear.gif")
   FillPolygon(0,0,.28, 0,1.75,.28, .61,1.75,.28, .61,0,.28)
   Texture("jeffery-left.gif")
   FillPolygon(0,0,0, 0,1.75,0, 0,1.75,.28, 0,0,.28)
   Texture("jeffery-right.gif")
   FillPolygon(.61,0,.28, .61,1.75,.28, .61,1.75,0, .61,0,0)
   PopMatrix()
end
// version 103
// numTextures,numTris,numVerts,numParts,1,numLights,numCameras
4,8,8,1,1,0,0
// partList: firstVert,numVerts,firstTri,numTris,"name"
0,8,0,8,"drj"
// texture list: name
jeffery-front.gif
jeffery-rear.gif
jeffery-right.gif
jeffery-left.gif
// triList: materialIndex,vertices(index, texX, texY)
0, 0,0,256, 1,0,0, 2,256,0
0, 0,0,256, 2,256,0, 3,256,256
1, 4,0,256, 5,0,0, 6,256,0
1, 4,0,256, 6,256,0, 7,256,256
2, 7,0,256, 6,0,0, 1,256,0
2, 7,0,256, 1,256,0, 0,256,256
3, 3,0,256, 2,0,0, 5,256,0
3, 3,0,256, 5,256,0, 4,256,256
// vertList: x,y,z
0,0,0
0,1.75,0
.61,1.75,0
.61,0,0
.61,0,.28
.61,1.75,.28
0,1.75,.28
0,0,.28
// lightList: "name", type, x,y,z, r,g,b, (type-specific info)
// cameraList: "name", x,y,z, p,b,h, fov(rad)
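A minimal reader for this kind of S3D fragment might look like the following Python sketch (Jafar's classes are the real thing; this only handles the // comment lines and the header/part/texture/triangle/vertex sections shown above, driven by the counts in the header line):

```python
SAMPLE = """\
// version 103
// numTextures,numTris,numVerts,numParts,1,numLights,numCameras
1,1,3,1,1,0,0
// partList: firstVert,numVerts,firstTri,numTris,"name"
0,3,0,1,"tri"
// texture list: name
tex.gif
// triList: materialIndex,vertices(index, texX, texY)
0, 0,0,0, 1,0,0, 2,0,0
// vertList: x,y,z
0,0,0
1,0,0
0,1,0
"""

def parse_s3d(text):
    # drop blank and comment lines, then slice sections by header counts
    rows = [ln.strip() for ln in text.splitlines()
            if ln.strip() and not ln.strip().startswith("//")]
    hdr = [int(v) for v in rows[0].split(",")]
    ntex, ntris, nverts, nparts = hdr[0], hdr[1], hdr[2], hdr[3]
    i = 1
    parts = rows[i:i + nparts]; i += nparts
    textures = rows[i:i + ntex]; i += ntex
    tris = [[float(v) for v in r.split(",")] for r in rows[i:i + ntris]]
    i += ntris
    verts = [[float(v) for v in r.split(",")] for r in rows[i:i + nverts]]
    return parts, textures, tris, verts

parts, textures, tris, verts = parse_s3d(SAMPLE)
```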
Design note #1: parsing and rendering constitute enough behavior to go ahead and make a class (or maybe a built-in) out of this. However, Jafar has written far more sophisticated code we will prefer to use.
Design note #2: while we can make a generic S3D renderer fairly easily, to animate body parts (legs, arms, etc), our model will need to insert Rotation capabilities at key articulation points. We will consider this and the S3D part mechanism after we get "out of the box" into a higher polygon count.
Performance note: in "real life" there are polygon "mesh modes" that would allow several/many triangles in a single call. This is the kind of thing that using Jafar's classes would give you, over doing it yourself. Note that at one time I began planning a u3d file format as a minor simplification based on s3d.
procedure draws3d(r)
   loads3d("drj.s3d")
   # place somewhere in room r
   myx := r.minx + ?(r.maxx - r.minx)
   myy := r.miny
   myz := r.minz + ?(r.maxz - r.minz)
   # ensure a meter of room to work with
   myx <:= r.minx + 1.0; myx >:= r.maxx - 1.0
   myz <:= r.minz + 1.0; myz >:= r.maxz - 1.0
   PushMatrix()
   Translate(myx, myy, myz)
   WAttrib("texmode=on")
   every i := 1 to triCount do {
      tri := triangleRecs[i]
      v1 := vertexRecs[tri.vi1 + 1]
      v2 := vertexRecs[tri.vi2 + 1]
      v3 := vertexRecs[tri.vi3 + 1]
      Texture(textureRecs[tri.textureIndex + 1]) |
         stop("can't set texture ", textureRecs[tri.textureIndex + 1])
      WAttrib("texcoord=" || utexcoord(tri.u1,tri.v1) || "," ||
              utexcoord(tri.u2,tri.v2) || "," || utexcoord(tri.u3,tri.v3))
      FillPolygon(v1.x,v1.y,v1.z, v2.x,v2.y,v2.z, v3.x,v3.y,v3.z)
   }
   PopMatrix()
end
Next let's see what the Rabin chapter on Character Animation has to say about 3D Modeling. Chapter 5.2, Character Animation (Chapter 5.2, original).
(lecture covered slides 1-14).
from: warrior.x and warrior.gif
While this course isn't about networks, games use them and it is appropriate to provide a brief introduction to network programming. Especially if you have never done network programming before, you should read Chapters 5 and 15 of the Unicon book for a discussion of network programming in Unicon. For other languages you wish to use, you should seek out (and try out) their comparable functionality.
Stream-oriented protocols are usually more human-readable, with ASCII text line-oriented message formats. For example, the HTTP protocol sends its headers as a sequence of lines in an easily readable format like:
Fieldname: value
Fieldname2: value2
...
ending with a blank line, after which the data payload follows.
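Parsing such headers is straightforward; a Python sketch: accumulate "Field: value" pairs until the blank line that separates headers from the payload.

```python
def parse_headers(text):
    headers = {}
    lines = text.split("\r\n") if "\r\n" in text else text.split("\n")
    for i, line in enumerate(lines):
        if line == "":                      # blank line: headers end
            payload = "\n".join(lines[i + 1:])
            return headers, payload
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return headers, ""                      # no payload present
```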
This was immediately followed by a "fork-exec" model, in which each incoming connection triggers a new process, so that multiple users can be served simultaneously. Separate server processes for each user gives good fault tolerance (one user's server process crashing might not affect others') and poor/slow communication for applications where users interact with each other via the server.
Since process creation is slow, "fork-exec" has been replaced by various newer models, including farming the work out to a pool of pre-created processes, and using threads instead of processes.
Context switching between processes is very slow, and even switching between threads is pretty slow. In addition, communication between processes or even threads is slow. For these reasons, modern multi-user servers might have each thread handling several user connections -- especially if certain users tend to communicate together a lot. The number of users per thread might depend on how CPU-intensive the server threads' tasks are in support of each user -- if the server has to do a lot of work for each user transaction, it is easier to justify a separate thread for each user.
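The "one thread handling several user connections" model is what readiness-multiplexing APIs like select() enable. A small Python sketch using the standard selectors module (socketpairs stand in for real client connections):

```python
import selectors
import socket

def pump(socks, expected):
    """Read `expected` messages across any of `socks` with a single
    selector loop -- one thread multiplexing several connections."""
    sel = selectors.DefaultSelector()
    for s in socks:
        s.setblocking(False)
        sel.register(s, selectors.EVENT_READ)
    seen = []
    while len(seen) < expected:
        for key, _ in sel.select(timeout=5):
            data = key.fileobj.recv(4096)
            if data:
                seen.append(data)
    sel.close()
    return seen

# two independent "client" connections, served by the same loop
a1, b1 = socket.socketpair()
a2, b2 = socket.socketpair()
a1.send(b"one")
a2.send(b"two")
msgs = pump([b1, b2], expected=2)
```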
I am not the only person crazy enough to propose garage-scale MMO development. Note that without a certain level of 3D graphics capability we cannot undertake this goal at all, and unless we find a way to make 3D graphics quite easy, it is far beyond our available resources.
Because writing a CVE is potentially so incredibly technically challenging, there is a danger that the only people who can do it are large multi-million-dollar industry labs. In this class we are interested in CVEs as vehicles for both direct and indirect research:
There is foreground awareness and background awareness. Things should be in the background unless/until they start interfering with what you are doing.
Background awareness may include users' real locations and schedules, whether they are at their keyboard and looking at the screen at the moment, what task they are performing, etc.
With a compelling proof-of-feasibility like Everquest in mind, we cannot help but believe that a CVE will soon dominate many fields of remote communication and endeavor. It is only a matter of time before CVE's are used for distance education, virtual dating and sex, live theater, circus and other public performances, as well as major meetings such as conferences, associations, and the activities of governmental organizations.
One of the reasons to study content creation is to test its limitations and see what ideas ought to be present in future virtual worlds we might build.
Proposal: add 3D model file "parts" for each avatar body part in the model. Write a new subclass of Avatar and of Body Part to work off of (and be populated from) the S3D data.
procedure parseroom(s,f)
   local t, r
   t := parseplace(s,f)
It then builds a room object:
   r := Room(t["name"], t["x"], t["y"], t["z"],
             t["w"], t["h"], t["l"], t["texture"])
Mapping the table t (all contents of the .dat) to the Room r is a double-edged sword. On the pro side, fields in the .dat can be in any order, and extra fields cause no harm. (If a field is missing from a room, the Room constructor had better have a default it can use.) On the con side, an extra memory copy happens here that could be avoided if the instance itself were passed in and populated; but parseplace() is highly polymorphic (one piece of code used for many types of objects composed from fields), and that would complicate its internals.
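The "any order, extras harmless, defaults for missing fields" behavior can be sketched like this (Python; the default values here are made up for illustration):

```python
ROOM_DEFAULTS = {"x": 0.0, "y": 0.0, "z": 0.0,
                 "w": 1.0, "h": 3.05, "l": 1.0,
                 "texture": "wall.gif"}     # hypothetical defaults

def make_room(t):
    # fields may appear in any order; extras are ignored; missing
    # fields fall back to the constructor's defaults
    r = {k: t.get(k, d) for k, d in ROOM_DEFAULTS.items()}
    r["name"] = t.get("name", "unnamed")
    return r   # stand-in for calling the real Room constructor
```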
Procedure parseplace() builds the table (a set of fieldname keys and associated values). The place is terminated by a "}" when it appears by itself, not as part of a field (fields are parsed here by parsefield()).
procedure parseplace(s,f)
   local t, line
   t := table()
   while line := readlin(f) do line ? {
      tab(many(' \t'))
      if ="}" then break
      if &pos = *&subject+1 then next
      parsefield(t, tab(0), f)
   }
   return t
end
parsefield()
grabs a field name (delimited at present by
space/tab characters),
which will serve as a key in the table. It then calls parseval() to parse
a value, which may itself be a complex structure.
procedure parsefield(x,s,f)
   local field, val
   s ? {
      tab(many(' \t'))
      (field := tab(upto(' \t'))) | {
         write("fieldname expected: ", image(tab(0)))
         runerr(500, "model error")
      }
      tab(many(' \t'))
      val := parseval(tab(0),f)
      if field=="texture" then val := world.find_texture(val)
      if (field == "action") then {
         /(x["actors"]) := []
         put(x["actors"], 1)
      }
      x[field] := val
   }
end
A value by default might simply be an arbitrary string after the fieldname, extending to the end of the line. There are three special cases which have more complex semantics: a numeric constant, a Wall object, and a list.
procedure parseval(s,f)
   local val
   s ? {
      tab(many(' \t'))
      if val := numeric(tab(many(&digits++"."))) then return val
      else if ="Wall" then return parsewall(tab(0), f)
      else if ="[" then return parselist(tab(0), f)
      else return trim(tab(0))
   }
end
If we chase inside parselist() we would find that other virtual objects must appear inside a list object, while Walls do not. It seems odd (and bad design, basically) to single out Wall() here as a special syntactic entity.
Dr. J should add additional notes here on parsewall() and parselist().
The recommended way to test your room data + textures is to run your sample data with the jeb1 demo. At least one student reported the jeb1 demo not running for them on Windows. It ran for me on a Vista laptop and on XP on VMware on Linux... but if you are having difficulties, see me for help, or try another machine. Oh: what image file formats are you trying to use? .gif is safe; .jpg and .png are "maybes". libjpeg worked on Linux but not on Windows last time I checked. Jafar claims we have libpng support, but I'm not sure whether that has been built into the Windows version yet, either.
put(grouping, moveuid || "part " || name || " " || dir || " " || ang)
Calls like this are followed (at the end of actions()) by a call to flushnet():
method flushnet()
   if session.isUp() then {
      session.Write(grouping)
      grouping := list()
      }
end

Session's Write() method bundles up a list of strings as a single string, so it gets sent as a single packet. Where does it go then? The server receives these commands... and does what?
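The batching idea — queue up command strings, then ship them all in one payload — can be sketched in Python. Names here are illustrative (this is not Session's actual API), and newline framing is assumed because the server's read loop peels off commands at "\n":

```python
import socket

def flush_net(sock, grouping):
    """Send all queued command strings as a single payload, then clear the
    queue -- a sketch of flushnet()/Session.Write() described above."""
    if not grouping:
        return 0
    # One newline-terminated command per line, one sendall() for the batch.
    payload = "\n".join(grouping) + "\n"
    sock.sendall(payload.encode())
    grouping.clear()
    return len(payload)
```

Batching matters because N small writes can become N packets; one write for a burst of avatar-movement commands keeps per-packet overhead down.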
# server.icn::run()
if not (L := select(socket_list, Ladmins)) | *L=0 then next
...
if buffer2 := Tsock_pendingin[sock] || sysread(sock) then {
   ...
   buffer2 ? {
      while buffer := tab(find("\n")) do {
         ExecuteCommand(sock)
         ...
         "move": {
            dynStHandler.saveAvatarState(Cmds, sock, Tsock_user, parsed[2])
            dynStHandler.getRecepientUsers(Cmds, sock, Tsock_user, Tuser_sock,
                                           TrecpUser_sock, "AvtMove", parsed[2])
            sendtoSelected(sock, TrecpUser_sock, parsed[2], "move", 1)

Saving state involves writing to the server's local disk. getRecepientUsers is another matter.
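The server's use of Tsock_pendingin plus sysread() is classic line-buffered framing: a read may end mid-command, so the partial line is saved until the rest arrives. A minimal Python sketch of just that framing step (names are illustrative):

```python
def drain_commands(pending, data):
    """Append newly read text to any pending partial line, peel off every
    complete newline-terminated command, and return the leftover fragment
    to carry into the next read."""
    buf = pending + data
    *complete, rest = buf.split("\n")
    return complete, rest
```

Each element of `complete` would then be dispatched the way server.icn's while loop dispatches "move" and its siblings.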
Lecture XX. Future Trends in Games and Virtual Environments
Methods of locating someone else in virtual space. Some of these are graphical, some textual, and some could be either. Perhaps some CVE's would eschew some of these techniques in order to be "realistic". What other methods can you think of?
In real-world-based CVE's, there may not be "missions" but there may still be goals, such as: complete a homework assignment so that it passes an automatic submission tester.
One interesting point is: people cannot always remember the details of their goals: where to go, what to do, etc. They sometimes wind up writing down the instructions they were given by a (computer-controlled) character in the game. It is sort of obvious that the computer should provide assistance with this task, provide some of the capabilities of a Personal Digital Assistant, such as a Todo list. City of Heroes does this rather nicely.
Robinson et al distinguish between upward scalability (more people) and sideways scalability (different people). Besides scaling users and groups, they argue for more different kinds of objects in CVE's, especially objects with real-world presence (machines, printers, files), where manipulating the object in the CVE causes real-world work to get done.
Should all this work happen inside the CVE? Robinson argues for the 3D part of a CVE to be only one of many different collaboration programs, complementing other forms such as document viewers, web, and audio/video connections. In support, they observe that the 3D CVE's usually overemphasize the people, while other applications usually underrepresent them.
They are arguing that all our mainstream applications should become CVEs and propose a VIVA architecture along these lines. There are many pros to this approach, such as accessibility and interoperability when 3D graphics are not available, from the web or a PDA, etc. What are the drawbacks to trying to make all our regular applications CVE's? Do Robinson et al identify those drawbacks?
Other ideas:
Besides "master servers", VIVA uses at least 6 kinds of special-purpose servers. Traditional services of "VR servers": spatial data processing, collision detection, awareness and authorization services, environment partitioning. Dynamic repartitioning is seen as central to scaling to more users.
[Snowdon96]: A review of distributed architectures for networked virtual reality. Virtual Reality: Research, Development, and Applications 2(1), 155-175. gives a reference architecture consisting of:
Run-time error 107
File cve.icn; Line 244
record expected
offending value: &null
Traceback:
   main()
   make_SH167(...parameters...) from line 73 in please8.icn
   Room_add_door(...parameters...) from line 19 in please8.icn
   {&null . coords} from line 244 in cve.icn
Lecture 31. Future Trends in Games and Virtual Environments
In between silence and talking there is a continuum comprising mutual sense of presence, body movement, mutual gaze awareness, and the trajectory of body motion (towards someone, away from them, on a route unrelated to them, etc.). "Sleepy mode": avatar looks like an ice cream cone.
Contact Space and Meeting Space: the lounge versus the seminar room. The big difference is whether others can interrupt.
Nessie world: different rooms for different working contexts. Avatars are Lego puppets. Agent avatars signal time of day (waiters, janitors?), active virtual furniture shows external values and activities (temperature? stock values?). Projecting the CVE in the background: on the wall, or maybe on the root window?
Experience results: the "meeting space" won't be the focus during meetings; the focus is on the material presented, documents being reviewed, etc. People will want to customize their avatars, but do not need (or want) them to look exactly like in real life.
Who is talking in the CVE? Small window size means this vital information may need to be exaggerated.
When is symbolic acting unfortunate? When it embarrasses you publicly, because you want to quietly work on something else (say, surf a website) while in a meeting in the CVE in another window. Sometimes you don't want the system reporting your every action to others!
More issues: security can be a problem; no one wants to have another account to login to; contact space needs to cooperate/integrate with e-mail, telephone, etc.; contact space needs to be accessible via PDA/cell phone, etc.
The Forum is not just chat, it is the ability to comfort, monitor, increase awareness, and observe others.
Dr J's idea of the day: "selecting" (clicking on) another avatar with your mouse might send an acknowledgement message to the other person, to let them know you are looking at them and they have your attention, as a precursor, if they wish to chat.
This course is not about 3D Graphics: we won't be covering advanced algorithms for photo-quality rendering like they use in the CG movies. But, everybody can learn enough 3D graphics to be useful for your project.
In practice, complex objects are composed from simpler objects. Each simpler object that is a part of the more complex object is given by specifying its location and orientation with respect to the complex object. This is how, for example, you might attach an arm to a torso, or attach different pieces to a table or lamp or whatever.
Location and orientation are more generally given by the operations Translation, Rotation, and Scaling. A basic result of early work in computer graphics was to combine and apply all three operations via a single matrix multiplication. We don't have to write the matrix multiplication routine, see CS 476 or a linear algebra class for that. We can just enjoy the fruit of their labors as manifested in our 3D graphics library (OpenGL) and a higher level API built on top of it.
At some point the "outermost" objects (say, an entire table or an entire person) are placed into the virtual world by similarly specifying the object's location and orientation with respect to World Coordinates.
Rendering an object in room coordinates example:
PushMatrix()
Translate(o.x, o.y, o.z)   # position object within World Coordinates
o.render()                 # object rendered in Object Coordinates
PopMatrix()

Note that if a subobject is rotated relative to its parent object, the rotation will look crazy unless the subobject is first translated to the origin, then rotated, then translated back to its intended position.
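The translate-rotate-translate ordering in that note can be seen in a tiny 2D sketch in Python (hand-rolled rotation rather than OpenGL's matrix stack; the function name is illustrative):

```python
import math

def rotate_about(px, py, cx, cy, angle):
    """Rotate point (px, py) about pivot (cx, cy) by `angle` radians:
    translate the pivot to the origin, rotate, translate back.
    Rotating without the translations would spin the point about the
    world origin instead -- the 'crazy' result the note warns about."""
    dx, dy = px - cx, py - cy                  # translate pivot to origin
    s, c = math.sin(angle), math.cos(angle)
    rx, ry = dx * c - dy * s, dx * s + dy * c  # rotate
    return (cx + rx, cy + ry)                  # translate back
```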
These primitives are further flexified by the Scale() function. When stretched (via scaling), primitives like DrawCube() can handle any rectangular shape.
OpenGL has 8 lights, which can be turned on or off, positioned at specific locations, and can feature any mixture of three different kinds of light: diffuse, ambient, and specular. Diffuse seems to be the dominant light type, with the others modifying it. In the example:
WAttrib(w, "light0=on, ambient blue-green", "fg=specular white")

Objects would look their normal (diffuse) color given by their foreground ("fg") attribute, except there would be a bit of blue-green on everything from the lighting, and objects that have a lot of shininess (read your manuals!) will reflect a lot of white on the shiny spots.
In addition, if you are not using a texture, the "fg" attribute for an object can include an object's appearance under the three kinds of light, and can include a fourth kind of light, emission, where the object glows all on its own.
Fg(w, "diffuse light grey; ambient grey; _
      specular black; emission black; shininess 50")
One thing that was added recently to the 3D facilities is the ability to blend the texture and the fg color when drawing an object ("texmode=blend"). One thing that is going to be added in the future (as soon as I get a student to help) is a set of predefined / built-in textures ("brick", "carpet", "cloth", "clouds", "concrete", "dirt", "glass", "grass", "grill", "hair", "iron", "marble", "metal", "leaf", "leather", "plastic", "sand", "skin", "sky", "snow", "stone", "tile", "water", and "wood").
Reading: Unicon book, chapters 5 and 15.
Networking support in Unicon was designed by Dr. Shamim Mohamed (Logitech, Inc. of Silicon Valley) with a little help from Clint Jeffery, implemented for UNIX by Shamim Mohamed, and ported to Windows by Li Lin (M.S. student) and Clinton Jeffery. These capabilities are simple, easy to use communication mechanisms using the primary Internet application protocols, TCP and UDP. Unicon also has "Messaging facilities", providing support for several popular network protocols at a higher level than the network facilities (HTML, POP, ...), done by Steve Lumos (M.S. student).
Besides the IP number identifying a particular machine, most Internet services specify a port at which communication takes place; the ports serve to distinguish different programs or services that are all running or available on a given server. The ports with small numbers (say, the first few hundred) have standard services associated with them, while higher numbered ports can have arbitrary server-defined associations to custom applications. Ports providing standard services can usually only be run by the administrator of a machine; ordinary end users can generally use higher numbered ports.
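A quick sketch of these ideas in Python: binding to port 0 asks the OS for a free high-numbered port (the range ordinary users are allowed to use), and a client then reaches the service by the (IP number, port) pair:

```python
import socket

# Bind to port 0 so the OS assigns a free high-numbered port; ordinary
# users generally cannot bind the low-numbered, administrator-only ports.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

# The port distinguishes this particular service from everything else
# running on the same machine at the same IP number.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
conn, _ = srv.accept()
cli.sendall(b"hello\n")
line = conn.recv(64)
cli.close(); conn.close(); srv.close()
```

This is the same TCP model Unicon's open(..., "n") / open(..., "na") facilities wrap at a higher level.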
"Kansas" - 2D, programming environment for the language "Self". Self is a delegation-based descendant of smalltalk.
"Field of view in most CVE's is so narrow that other avatars are usually off-screen".
Capabilities could be used anywhere in a CVE, but perhaps the user interface is a sufficient place for them. A capability can itself have a visible manifestation (like the piece of chalk, or the microphone). A "capability tray" might hold (and allow easy sharing) of a user's entire capability set.
Big problem with scalability (too many pages to view). Cluster pages together into 1 sphere per site.
WWW3D shows very little page contents, mainly shows links; color codes pages by how recently they were viewed. Web planetarium uses the first image in the page as a texture (often a logo or person).
From public demo: users avoided following links to "warp" new sites into the 3D layout. They preferred to wander around a landscape that is already created.
"it is impossible to allow dynamic shared state to change frequently and guarantee that all hosts simultaneously access identical versions of that state"
Variation: distributed shared repository, in which different dynamic state is managed on different machines. "Virtual" centralized repository.
Idea: how consistent your information about others is can be proportional to their importance to you or their proximity to you; this doesn't have to be a boolean visible/too-far-away condition. What about updating with frequency proportional to distance? Server could compute: should I send user X's move to user Y? with probability = (1.0 - distance) * (1.0 - direction)
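That probability heuristic can be written out directly in Python. It assumes distance and direction have been normalized into [0,1] (0 = right next to Y / heading straight at Y, 1 = maximally far / heading away), which the note leaves implicit:

```python
import random

def should_send(distance, direction, rng=random.random):
    """Decide whether the server forwards user X's move to user Y,
    with probability (1 - distance) * (1 - direction) as suggested
    in the notes. Clamped so out-of-range inputs can't go negative."""
    p = max(0.0, 1.0 - distance) * max(0.0, 1.0 - direction)
    return rng() < p
```

Nearby users heading toward you thus get nearly every update, while distant users moving away get almost none, smoothing out the boolean visible/too-far-away cutoff.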
Specific machine "owns" the object for which it broadcasts updates; more complicated for others to modify the state of that object. Works best in a LAN setting (many early LAN games used this model).
Concept: "lock lease" = locks that automatically timeout.
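A minimal sketch of a lock lease in Python; the automatic timeout is the whole point, so that a crashed or lagging client cannot hold an object forever. (A real CVE server would also want to notify the previous holder; class and method names are illustrative.)

```python
import time

class LockLease:
    """A lock that automatically times out after `duration` seconds."""
    def __init__(self, duration=5.0, clock=time.monotonic):
        self.duration = duration
        self.clock = clock        # injectable for testing
        self.holder = None
        self.expires = 0.0

    def acquire(self, who):
        now = self.clock()
        if self.holder is None or now >= self.expires:
            # Free, or the previous lease has lapsed: grant a fresh lease.
            self.holder, self.expires = who, now + self.duration
            return True
        return self.holder == who  # current holder keeps access

    def release(self, who):
        if self.holder == who:
            self.holder = None
```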
Latencies of 250ms are not uncommon on WANs.
Jitter = variation in latency from one packet to the next.
Prediction = how we calculate current state based on previous packets, commonly using derivative polynomials (velocity, acceleration, and possibly "jerk"). Order 0 = the state regeneration technique. Order 1 adds velocity. Order 2 (with acceleration) is "the most popular in use today". Note: if acceleration changes every packet, using it generates a lot of error. It is good to disable acceleration dynamically when it is not helping; maybe use it only when it is nonzero and consistent for 3+ updates in a row.
Derivative polynomials don't take into account our knowledge about the semantics of the object. Separate dead reckoning for each class of virtual object?
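The three prediction orders can be sketched as one Python function (a per-axis derivative polynomial; the tuple representation and names are illustrative):

```python
def predict(pos, vel, acc, dt, order=2):
    """Dead-reckoning prediction with derivative polynomials.
    order 0: state regeneration (hold the last reported position);
    order 1: add velocity; order 2: add acceleration.
    pos/vel/acc are (x, y, z) tuples; dt is seconds since the packet."""
    out = []
    for p, v, a in zip(pos, vel, acc):
        x = p
        if order >= 1:
            x += v * dt                 # first derivative term
        if order >= 2:
            x += 0.5 * a * dt * dt      # second derivative term
        out.append(x)
    return tuple(out)
```

Disabling acceleration when it is noisy, as the note suggests, is just dropping from order=2 to order=1 for that object.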
Convergence = how we correct error. Instead of "jumping" to correct, we might smoothly adjust. Goal: "correct quickly without noticeable visual distortion". "Snap convergence" just lives with the distortion.
linear convergence: given the corrected coordinates, predict where the object will be in 1 second. Now, compute the prediction values for the object so that it moves from its current, erring position to where it is supposed to be a second from now. (what if this runs through a wall?)
To do better: use a curve-fitting algorithm, maybe a cubic spline.
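Linear convergence, sketched in Python: use the corrected state to predict where the object should be one second out, then return the velocity that carries the on-screen object there from its current (erring) position over that horizon:

```python
def linear_convergence(current, corrected, vel, horizon=1.0):
    """Given the on-screen position `current`, the authoritative corrected
    position `corrected`, and its velocity `vel` (all (x, y, z) tuples),
    return the display velocity that moves the object from where it
    erroneously is to where it should be `horizon` seconds from now."""
    # Where the corrected state says the object will be at t + horizon:
    target = tuple(c + v * horizon for c, v in zip(corrected, vel))
    # Velocity needed to cover (target - current) in `horizon` seconds:
    return tuple((t - cur) / horizon for t, cur in zip(target, current))
```

As the note asks, nothing here checks the straight-line path against the world geometry; the converging object may well pass through a wall, which is what the curve-fitting alternatives try to mitigate.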
In the case of 3D windows, we might use a similar strategy but instead keep a display list, which is a data structure that contains all the data about all graphics operations that have been performed since the last time the 3D window was opened or erased.
OpenGL has a display list concept, but its display lists would not be easily manipulated from the Unicon application level, so we maintain our own display list as a regular Icon/Unicon list. Each element of the list is a list or record produced as a by-product of a 3D output primitive (either a 3D function call, or an attribute that was set) written on that window. Unfortunately, the elements of the display lists are somewhat underdocumented at present, so we will describe them in detail here.
L := WindowContents(w)
every i := 1 to *L do {
   writes(i, ": ", image(L[i]), " -> ")
   every j := 1 to *(L[i]) do
      writes(image(L[i, j]), " ")
   write()
   }
3D Function | type | notes |
---|---|---|
DrawTorus | gl_torus(name, x, y, z, radius1, radius2) | |
DrawCube | gl_cube(name, x, y, z, length) | |
DrawSphere | gl_sphere(name, x, y, z, radius) | |
DrawCylinder | gl_cylinder(name, x, y, z, height, radius1, radius2) | |
DrawDisk | gl_disk(name, x, y, z, radius1, radius2, angle1, angle2) | |
Rotate | gl_rotate(name, x, y, z, angle) | |
Translate | gl_translate(name, x, y, z) | |
Scale | gl_scale(name, x, y, z) | |
PushMatrix | gl_pushmatrix(name) | |
PopMatrix | gl_popmatrix(name) | |
IdentityMatrix | gl_identity(name) | |
MatrixMode | gl_matrixmode(name, mode) | |
Texture | gl_texture(name, texture_handle:integer) | internal code used by OpenGL |
Fg | ["Fg", ["material", r, g, b], ... ] |
Attribute | type | notes |
---|---|---|
linewidth | ["linewidth", width] | |
dim | ["dim", i] | |
texmode | ["texmode", i] | |
Texcoord | ["Texcoord", val] |
We will run out of texture memory sooner or later, but it needs to be later.
tex := Texture(w, s)
...
Texture(w, s, tex)

This will modify an existing texture on the display list, instead of creating a new one. It will also set the current texture to tex.
How soon? Well, I put the prototype code into ~jeffery/unicon/unicon last night, but it isn't tested or checked out on Windows yet. I'll try for ASAP.
Q: Why, and for what, do we need file transfers?
A: User-supplied textures. New data and code files. Patching the
executable.