COPYRIGHT NOTICE. COPYRIGHT 2007-2020 by Clinton Jeffery. For use only by the University of Idaho CS 428/528 class.

Lecture Notes for CS 428/528 Multi-User Games and Virtual Environments

spring break,
Covid-19,
etc.

Syllabus

What's This Course About?

HW#0

The Big Questions for CS 428

Research question: how can we reduce the effort required to develop 3D and multi-player computer games?
It is fair game for us to cover in class, and do, stuff that might advance a research objective.
Development question: What CS skills can we learn or improve by studying game design and development?
It is desirable for this class to deliver CS skills of broad value to your other CS endeavors.

Interleaving 3D Topics with Network Programming Topics

Last time I taught this, the plan was 3D first half, network programming second half. This proved to be a mistake. This time around you can expect

Reading

Read/skim [Steed/Oliveira] chapter 1.

Highlights from [NG] Chapter 1

Illusion of a Shared Virtual Environment
Imaginary place, experienced together
RPGs
Text adventures were invented in the mid-1970's in homage to the Dungeons & Dragons tabletop experience, a shared virtual environment with no computer needed.
MUDs and MOOs
By the 1980's dialup modems and ARPAnet allowed multi-user text adventures, called MUDs.
DIVE system (1991)
By the 1990's, on expensive hardware you could find the earliest 3D multi-user research systems.
EverQuest (1999)
Explicitly a 3D MUD, down to the text-based command line interface. WoW is mostly a lightweight knock-off.
"Differences" between a CVE and typical net apps

Preparing for 3D Graphics Programming

Preparing for 3D graphics programming involves reviewing or learning some underlying math and hardware concepts and terminology, after which we can start cranking some examples.
Total Beginner's Guide to 3D Graphics Theory
Beginner's Guide to Learning 3D Computer Graphics, from 1:21 through 7:20
According to the video, 3D graphics consists of: ____________________, ______________________, and ____________________
"Let's Build a 3D Graphics..." tutorial series
Learning Modern 3D Graphics Programming
lecture #2 began here

Reminder

No class on Monday (Martin Luther King, Jr. Day)

Reading

Website for [Steed/Oliveira]: networkedgraphics.org

This site includes slides, code from the book, errata, etc. I think we basically covered NG chapter 1 last lecture; let's just take a quick peek and see if I missed anything good.

Universal Properties of Virtual Environments?

What would you add to this list?

Which MMO's Should We Try in this Class?

I would prefer that we stick to Free-to-Download and/or Free-to-Play options. Last class we decided to try out the market leader (WoW) for one class session, but having a breadth of experience might help you understand the genre more broadly. We could try out one or more additional virtual environments, particularly if they offer a substantially different experience, and/or are newer and therefore (one hopes) have progressed from the WoW/LOTRO model.

With that in mind, you might consider:

A previous instance of CS 428 tried Runescape out, because it supported Linux. Runescape Linux support seems to be only for Ubuntu and similar Debian-based systems, a fairly common restriction. It also claims to run in a web browser, but maybe the web browser client is only for Internet Explorer running on Windows. As badly as I want Linux support, I don't really want to go back into Runescape.

Are MMO's Dead?

NG Chapter 2

Boids. I am only mildly interested in Boids, but they are used for a recurring example in [NG], so let's learn about boids.

About network programming, we learned

IP #'s
32-bit integers (IPv4; 128-bit for IPv6) that identify machines on the internet, usually assigned by the network administration when you connect
ports
16-bit integers that identify applications on a given machine
sockets
kernel resources, similar to files, that manage network connections
TCP
Transmission Control Protocol, a bytestream that guarantees data will eventually arrive, in order.
UDP
User Datagram Protocol, a packet protocol that sends data and guarantees nothing. Maybe it will get to the other end.
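To make UDP's fire-and-forget sends concrete, here is a hedged, stdlib-only Java sketch (the class and method names are mine, not from any course code) that sends a single datagram to ourselves over the loopback interface and reads it back. Over a real network, the receive could simply never happen.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpLoopback {
    // Send one datagram to ourselves over loopback and read it back.
    public static String roundTrip(String msg) {
        try {
            // Receiver bound to an OS-chosen port on 127.0.0.1
            DatagramSocket receiver = new DatagramSocket(0, InetAddress.getLoopbackAddress());
            receiver.setSoTimeout(2000); // don't hang forever if the packet is lost
            try (DatagramSocket sender = new DatagramSocket()) {
                byte[] payload = msg.getBytes(StandardCharsets.UTF_8);
                sender.send(new DatagramPacket(payload, payload.length,
                        InetAddress.getLoopbackAddress(), receiver.getLocalPort()));
                byte[] buf = new byte[1500];
                DatagramPacket in = new DatagramPacket(buf, buf.length);
                receiver.receive(in); // blocks until a datagram arrives (or timeout)
                return new String(in.getData(), 0, in.getLength(), StandardCharsets.UTF_8);
            } finally {
                receiver.close();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello"));
    }
}
```

Note that nothing here acknowledges or retransmits; that is exactly the "guarantees nothing" part of UDP.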

Lecture 3

Quick Check: how many have active WoW accounts?

Things I Should Add to the Discussion of Sockets and TCP/UDP

Additional comments on [Steed/Oliveira] Chapter 2

Boids Code from Chapter 2

Zip file unpacks to a bin/ and a src/. The bin/ and bin/lib contain four .jar files, one of which is probably compiled from src/ and the other three seem to be vector math and java 3D utilities. We will only look at src/.

Highlights of Di Giuseppe Chapters 1-2

Check out HW#1

Lecture 4

Bblearn Hell?

Bblearn for this class has been turned on. There were some troubles with it today, so if it is misbehaving for you, please let me know.

Have you used UI's VPN?

In order to run a server of our own, and access from off campus, the easiest thing would be for us to all get the VPN working. It seems to work well from both Windows and Linux (and I presume, MacOS) so let's get it going for HW#2.

Some Elements of 3D Programming

Vertices

A.k.a. coordinates, 3D tends to use three reals to specify point locations (x,y,z).

World Coordinates

The world coordinate system has a horizontal x, a vertical y, and a z axis that leaps out of the page (or out of our projector) at us.

3D Primitives

To draw anything useful in 3D, you need at least three vertices (9 floats) to specify a triangle. Besides triangles, which are rarely used by themselves, OpenGL features many other primitives, including:


source:openglprojects.in

But really, complex shapes are usually composed of lots of triangles, organized into data structures called 3D models; more on that shortly.

If you develop in C/C++, there are two libraries that are "completely standard" OpenGL: libGL and libGLU.

The more complex 3D primitives from libGLU take parameters such as slices and rings to specify how closely to approximate, and therefore how much of your polygon budget to spend on, any given 3D primitive you want to render. Draw a sphere with 4 slices and 4 rings and you may have something closer to a cube or diamond in appearance.

Light

Lighting in classic OpenGL is pretty counterintuitive, but usable to achieve simple effects. For anything fancy, you have to go to shaders, or require the current high-end GPUs capable of real time ray tracing.
ambient
"base" light that shines equally everywhere
directional
"infinite distance" light, as per the sun
point
light in all directions from an xyz point source, as per a lightbulb
spotlight
a directional point light

Materials

Properties of an object that determine what happens to light that hits it. By default, you have this OR you have a texture, not both.
diffuse
"base" color that an object reflects when light is shone on it
specular
property of an object to reflect directional light
emission
property of an object to emit light of its own

The Camera

Within the world coordinate system, the camera:

A rectangle called the viewport is defined by the near side of the frustum. To view on a 2D display, all objects in the frustum are projected onto this viewport.
   // in the application class
   public PerspectiveCamera cam;

   // ... in the application's create()
   cam = new PerspectiveCamera(67, Gdx.graphics.getWidth(),
                                   Gdx.graphics.getHeight());
   cam.position.set(2, 2, 2);
   cam.lookAt(0, 0, 0);
   cam.near = 1f;
   cam.far = 300f;
   cam.update();

Models, ModelInstances, and ModelBatch

   // in the application class
   public ModelBatch modelBatch;
   public Model model;
   public ModelInstance instance;

   // ... in the create()
   modelBatch = new ModelBatch();
   ModelBuilder modelBuilder = new ModelBuilder();
   model = modelBuilder.createSphere(2, 2, 2, 20, 20,
             new Material(ColorAttribute.createDiffuse(Color.YELLOW)),
             Usage.Position | Usage.Normal);
   instance = new ModelInstance(model);
The code that makes models appear onscreen is given later in render().

Environment

Usually this means: Lights.
   environment = new Environment();
   environment.set(new ColorAttribute(
              ColorAttribute.AmbientLight,
	      0.4f, 0.4f, 0.4f, 1f));
   environment.add(new DirectionalLight().set(
              0.8f, 0.8f, 0.8f, -1f, -0.8f, -0.2f));

Application Class

public class MyModelTest extends ApplicationAdapter {
   public Environment environment;
   public CameraInputController camController;

@Override
public void create() {
   // ... environment code
   // ... camera code
   // ... model code

   camController = new CameraInputController(cam);
   Gdx.input.setInputProcessor(camController);
}

lecture 5

Additional comments on [Steed/Oliveira] Chapter 2

This is more a response to Chapter 2 than a summary of it.

Boids network protocol

Each boid is a line of the form
     posx,posy,posz,velx,vely,velz
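A hedged sketch of reading and writing that line format in plain Java (the class and method names here are mine, not from the book's code):

```java
public class BoidProtocol {
    // Parse a boid state line: posx,posy,posz,velx,vely,velz
    public static float[] parse(String line) {
        String[] fields = line.split(",");
        float[] state = new float[6];
        for (int i = 0; i < 6; i++)
            state[i] = Float.parseFloat(fields[i]);
        return state;
    }

    // Produce the same comma-separated line from a 6-element state array.
    public static String format(float[] state) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 6; i++) {
            if (i > 0) sb.append(',');
            sb.append(state[i]);
        }
        return sb.toString();
    }
}
```

One line per boid keeps the protocol trivially human-readable, which helps when debugging with tools like telnet later on.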

# of Net connections

In real life, you can probably get away with hundreds of connections in an application, maybe thousands, but probably not tens-of-thousands.

Synchronous vs. Asynchronous

Synchronous is when we expect all N machines to send their packets each model update time unit (which might or might not, most likely not, correspond with the frame rate at which we render graphics). Synchronous tends not to scale well, settling for a lowest-common-denominator overall speed.

Asynchronous is if all N machines just send their packets however fast they get around to it, and nothing is scheduled. Asynchronous is usually better. It takes extra coding to handle asynchronous communications.

Blocking vs. Non-blocking

When you write a network program, you can either wait around for the I/O to complete, or you can keep computing.
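For example, in stdlib Java NIO (this sketch is mine, not from the book), a non-blocking accept() returns immediately instead of parking the thread:

```java
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NonBlockingAccept {
    // With configureBlocking(false), accept() returns null right away
    // when no client is waiting, so the caller can keep computing.
    public static boolean clientWaiting() {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            server.configureBlocking(false);
            SocketChannel client = server.accept(); // does not block
            if (client != null) { client.close(); return true; }
            return false;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(clientWaiting() ? "got client" : "no client yet");
    }
}
```

A blocking accept() on the same channel would instead sit and wait until some client connected.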

LibGDX Net module vs. Alternatives

The libGDX Net module has many tutorials. Still, we might not be satisfied with it for one or more reasons; for example, maybe it does not support non-blocking I/O. The libGDX net API's socket hints do let you specify a timeout, which might be good enough for clients. If you have to write a server, you may want full non-blocking I/O and a proper working select() function.

Game developers tend not to know network coding, and want to outsource it to some black-box game-network library. Just because you pick a 3rd-party network library does not mean things will magically be easy. Such libraries depend on, and can't do better than, the underlying OS (C) APIs and their semantics. Plus, they tend to impose their own additional weirdnesses that tie you to them.

Real-world Speed Check

lecture 6

Wow Status

Updates with Dated Remotes

In real life, at time T, each client will know its own boids' current state, but it will have slightly older state for everybody else's birds. If for every remote bird, you track the time it was last updated, you can estimate its current state for use in calculating your own boids' next position.
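A minimal sketch of that estimate in plain Java (names are hypothetical): the remote boid's position is extrapolated along its last known velocity by however stale the update is.

```java
public class DeadReckoning {
    // Estimate a remote boid's position at time 'now' from the last
    // state we received: position and velocity at time 'lastUpdate'
    // (both times in seconds).
    public static float[] estimate(float[] pos, float[] vel,
                                   double lastUpdate, double now) {
        float dt = (float) (now - lastUpdate); // how old the remote state is
        return new float[] {
            pos[0] + vel[0] * dt,
            pos[1] + vel[1] * dt,
            pos[2] + vel[2] * dt
        };
    }
}
```

This linear extrapolation is the simplest form of dead reckoning; it is wrong whenever the remote boid turned since its last update, but usually less wrong than using the stale position as-is.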

Wan Considerations

Frequent data losses

Frequent delays

Frequent connection breaks

Thoughts from the network code in [Steed/Oliveira] chapter 2

Network.java
implements Runnable and creates a new thread upon construction. That means you immediately need thread communication and/or synchronization.
How much threads programming have you done?
Java has easy, powerful threads facilities. But by the way, libGDX/OpenGL only runs on one thread, so if you do threads, keep all graphics/render calls on a single thread. And threads will reduce your portability (no HTML5 target, for example).
Anyhow, Network::run() shows signs of great simplicity. It doesn't mind burning through CPU:
public void run() {
   while (true) {
      receive();
      Thread.yield();
   }
}
A clean separate 3-thread execution model (one thread for view/graphics, one for controller/network, and the third for model/game) isn't a bad software architecture on machines with 4+ cores. Main thing then would be how the threads communicate.
TCPNetwork.java
Every program creates 3 sockets: a sender, a receiver, and a listener for opening a receiving connection. Pretty bizarre and wasteful. It creates a listener socket, and then accepts the connection lazily when the client calls recv().
BootstrapTCP.java
Launches a TCP boids app, defaults assume you are using a local NAT-translated fake net address of 192.168.1.95. Expect to give the four command line arguments as the default values won't work.
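The 3-thread (view/controller/model) architecture mentioned above can be sketched with a concurrent queue as the communication channel. This sketch is mine, not from the book's code; the update string format is just the boids line format.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class ThreeThreadSketch {
    // The network (controller) thread enqueues updates; the model thread
    // drains the queue each tick; the render (view) thread only reads
    // model state. A concurrent queue is one simple, lock-free way for
    // the threads to communicate.
    static final ConcurrentLinkedQueue<String> netToModel =
            new ConcurrentLinkedQueue<>();

    public static String oneTick() {
        try {
            Thread network = new Thread(() -> netToModel.add("0,0,0,1,1,1"));
            network.start();
            network.join(); // in real life the threads run concurrently
            return netToModel.poll(); // model thread consumes one update
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(oneTick());
    }
}
```

The join() here is only to make the sketch deterministic; the whole point of the real architecture is that the threads do not wait on each other.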

A LibGDX Network Example

Let's just wander into some chat code. We looked at

lecture 7

WoW Instructions

Look for Docjeffery at the south gate of Orgrimmar on Skywall server. If you are not in class, get on Discord Group "UI CS 428". WoW has temporary groups but they are limited to 5 or 6 players, and our class would not fit in one for group text chat. We could create a guild, possibly, but not for a single session like this.

Assessing and Comparing Technical Properties of MUVE's

Think about this in terms of: what would it take to implement? Across all these categories: what is mutable and what is immutable?

Property Name / Major Issues (fill in WoW's characteristics for each)

Player Characters
avatar, role, progression, inventory
World
size, navigation, open or fenced, known or discovered
Non-Player Characters
aid or attack, passive or aggressive, interaction depth
Story
multiplicity, depth, climax
Quests
voluntary or conscript, number, timeline
Society
politics, social standing, benefits, drawbacks
Community
guilds, groups, raids
Economy
scarcity, gathered vs. crafted, auctions vs. merchants
Violations
any breaks in immersion; where the "game" falls flat
lecture 8

Announcements

CVE Network Protocol Initial Discussion

3D Models

assets = new AssetManager();
assets.load("car.g3dj", Model.class);
assets.finishLoading();
model = assets.get("car.g3dj", Model.class);
instance = new ModelInstance(model);

lecture 9

Class was cancelled on Wednesday February 5. Sorry!

lecture 10

3D render()

Each render() is the opportunity to move the camera and its direction, and redraw the scene from scratch, but woe to you if you make updates that are non-incremental.
public void render() {
   camController.update();
   Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(),
                           Gdx.graphics.getHeight());
   Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT |
                  GL20.GL_DEPTH_BUFFER_BIT);
   modelBatch.begin(cam);
   modelBatch.render(instance, environment);
   modelBatch.end();
}

Frustum culling

Don't ask the 3D engine to render stuff that won't be in the viewing volume anyhow. Sure, it will be invisible, but it will slow everything down.

[Oehlke/Nair] gives a 3D example that uses 12 model instances. Will all 12 be visible? If you have a reasonable (small) number of models, brute force is an option:

private boolean isVisible(final Camera cam,
                          final ModelInstance instance) {
   Vector3 position;
   instance.transform.getTranslation(position = new Vector3());
   BoundingBox box = instance.calculateBoundingBox(new BoundingBox());
   return cam.frustum.boundsInFrustum(position, box.getDimensions());
}
... allowing the render() to end up like this:
public void render() {
   ...
   modelBatch.begin(cam);
   for (ModelInstance instance : instances) {
      if (isVisible(cam, instance)) {
         modelBatch.render(instance, environment);
      }
   }
   modelBatch.end();
   ...
}

Ray Picking

OpenGL does not have object selection built-in. There is a virtual 3D world, and then there are physical 2D pixel coordinates that input devices give, and defining the relationship is up to the programmer (boo). GDX however does provide support for (a simple form of) this in the camera classes (yay).

Ray picking == shooting a line (ray) from the camera through a point on the viewport (calculable from x,y screen coordinates by transforming them into world coordinates) and on into objects in the viewing frustum to see what the user clicked on.

In class CameraInputController's create() method:

   camController = new CameraInputController(cam) {
      private final Vector3 position = new Vector3();
      private final BoundingBox box = new BoundingBox();
      @Override
      public boolean touchUp(int screenX, int screenY,
                             int pointer, int button) {
         Ray ray = cam.getPickRay(screenX, screenY);
         for (int i = 0; i < instances.size; i++) {
            ModelInstance instance = instances.get(i);
            instance.transform.getTranslation(position);
            // bounding box for this particular instance
            instance.calculateBoundingBox(box);
            if (Intersector.intersectRayBoundsFast(ray, position,
                                           box.getDimensions())) {
               instances.removeIndex(i);
               i--;
            }
         }
         return super.touchUp(screenX, screenY, pointer, button);
      }
   };
   Gdx.input.setInputProcessor(camController);
Note that there is probably a less dorky example that would demonstrate selecting the "nearest" of the hit objects, rather than just deleting them ALL from the instances array.

Terminology Variants

I am going to use CVE (collaborative virtual environment) and MUVE (multi-user virtual environment) as synonyms, pretty much.

Highlights of Di Giuseppe chapter 3

Structure of the game

   -------------------             -------------------
   | DesktopLauncher |             | AndroidLauncher |
   -------------------             -------------------
                    \              /
                     \            /
                      ------------
                      |   Core   |
                      ------------
                            |
                  --------------------
                  | Main Menu Screen |
                  --------------------
                            |         ----------
                            |         | GameUI |
                            |       / ----------
--------------------   -------------
Leaderboards Screen|<--|Game Screen|
--------------------   -------------
                                    \ ---------
                                      | World |
                                      ---------

Code Discussed in Book

Ashley

an Entity Management System
See this page. Remind me, how does "entity" relate to "actor"?
provides classes like Entity, Component, EntitySystem
mostly, you subclass them and provide trivial methods. So vague that it makes you wonder if you need it at all, or could just use Array<*> and similar glue data structures
"good practices when dealing with a lot of similar dynamic objects"
flyweight design pattern
See this framework overview; the diagram is obviously created using PlantUML.
Despite my misgivings, according to the Ashley documents, Component classes such as ModelComponent are supposed to be data bags with no behavior. Despite the fact that this is in general considered a bad OO practice, it might be justifiable as it sort of fits the flyweight design pattern.

The full Ashley EntitySystem looks like:

public abstract class EntitySystem {
       public EntitySystem();
       public EntitySystem(int priority);
       public void addedToEngine(Engine engine);
       public void removedFromEngine(Engine engine);
       public void update(float deltaTime);
       public boolean checkProcessing();
       public void setProcessing(boolean processing);
}
Basically, in addition to an update(deltaTime), entity systems have an insert/delete on an engine, a setter and getter for a boolean processing flag, and an optional priority.

Bullet Physics

Bullet (bulletphysics.org) is a 3D physics engine
collision detection and rigid body dynamics library
much conceptual overlap with box2D
more powerful, but heavier weight
Used in Hollywood! as well as GTA series and others
C++ via JNI, open source
a portability problem. not supported for HTML5 targets, possibly some others
gdx classnames use prefix "bt" for bullet
try for 1:1 match w/C++
broad phase and narrow phase
divide space into "cells", only check for collisions in the same (or adjacent?) cells
AxisAligned Bounding Box
sort on one axis at a time, only have to check other dims on overlappers
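The axis-sort idea can be sketched as a 1D sweep-and-prune in plain Java (all names here are mine; real Bullet does this internally in C++): after sorting bounding-box intervals on one axis, only pairs whose intervals overlap there need a full AABB check on the other dimensions.

```java
import java.util.ArrayList;
import java.util.List;

public class SweepAndPrune {
    // An axis-aligned interval on the sort axis, plus an id for reporting pairs.
    public static class Box {
        int id; float min, max;
        Box(int id, float min, float max) { this.id = id; this.min = min; this.max = max; }
    }

    // Sort on one axis; once a later box starts past the current box's end,
    // no box after it can overlap either, so we can stop the inner scan.
    public static List<int[]> candidatePairs(List<Box> boxes) {
        List<Box> sorted = new ArrayList<>(boxes);
        sorted.sort((a, b) -> Float.compare(a.min, b.min));
        List<int[]> pairs = new ArrayList<>();
        for (int i = 0; i < sorted.size(); i++) {
            for (int j = i + 1; j < sorted.size(); j++) {
                if (sorted.get(j).min > sorted.get(i).max) break; // pruned
                pairs.add(new int[]{ sorted.get(i).id, sorted.get(j).id });
            }
        }
        return pairs;
    }
}
```

The pairs returned are only candidates for the narrow phase; overlap on one axis does not yet mean a collision.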

Rigid Bodies

lecture 11

Collision Shapes

MotionStates

Some kind of passive means of Bullet passing updates to your program without you having to explicitly update all objects' positions every update. See the bulletphysics webpage on MotionStates:

BulletPhysics has gone through a lot of different versions, and the above link might not be for the same version of Bullet you have, so although MotionStates info may be the same for whatever version you've got, you might want to check.

Bullet Physics Demo: Ball vs. Floor

The construction of a ball, and a floor for it to bounce off of, must needs include the graphics as well as the physics.
models = new Array<Model>();
modelbuilder = new ModelBuilder();
// creating a ground model using box shape
float groundWidth = 40;
modelbuilder.begin();
MeshPartBuilder mpb = modelbuilder.part("parts", GL20.GL_TRIANGLES,
                                 Usage.Position | Usage.Normal | Usage.Color,
                      new Material(ColorAttribute.createDiffuse(Color.WHITE)));
mpb.setColor(1f, 1f, 1f, 1f);
mpb.box(0, 0, 0, groundWidth, 1, groundWidth);
Model model = modelbuilder.end();
models.add(model);
groundInstance = new ModelInstance(model);
// creating a sphere model
float radius = 2f;
final Model sphereModel =
               modelbuilder.createSphere(
                  radius, radius, radius, 20, 20,
                  new Material(ColorAttribute.createDiffuse(Color.RED),
                               ColorAttribute.createSpecular(Color.GRAY),
                               FloatAttribute.createShininess(64f)),
                  Usage.Position | Usage.Normal);
models.add(sphereModel);
sphereInstance = new ModelInstance(sphereModel);
sphereInstance.transform.trn(0, 10, 0);

Bullet Physics: State Variables

In the application class, lots more variable declarations.

Nair's explanations come in a later section, after the code is presented. You might want to skip the code section and read the explanation first.

What are you supposed to see here?

private btDefaultCollisionConfiguration collisionConfiguration;
private btCollisionDispatcher dispatcher;
private btDbvtBroadphase broadphase;
private btSequentialImpulseConstraintSolver solver;
private btDiscreteDynamicsWorld world;

private Array<btCollisionShape> shapes = new Array<btCollisionShape>();
private Array<btRigidBodyConstructionInfo> bodyInfos =
          new Array<btRigidBody.btRigidBodyConstructionInfo>();
private Array<btRigidBody> bodies = new Array<btRigidBody>();
private btDefaultMotionState sphereMotionState;

Brief explanation of Family.*

Last time we saw the line
entities =
   e.getEntitiesFor(Family.all(ModelComponent.class).get())
In Ashley, this fetches the entities within Engine e that contain a ModelComponent. The method "all" is normally used with multiple parameters to fetch entities that have several specified components. Family.one and Family.exclude are other common filters applied to entities.

Resuming Discussion of Bullet

The example I started cranking last time was from Nair/Oehlke's libGDX book. Perhaps it was an unnecessary distraction and I should stick to di Giuseppe et al. But it may be useful to compare different applications and see what is common and what differs.

Rigid Bodies Initialization

Most create() codes in libGDX tend to be straightline code, but object orientation places such importance on objects and their relationships that it is perhaps worth diagramming the data structures here.
// Initiating Bullet Physics
Bullet.init();

//setting up the world
collisionConfiguration = new btDefaultCollisionConfiguration();
dispatcher = new btCollisionDispatcher(collisionConfiguration);
broadphase = new btDbvtBroadphase();
solver = new btSequentialImpulseConstraintSolver();
world = new btDiscreteDynamicsWorld(dispatcher, broadphase,
                                    solver, collisionConfiguration);
world.setGravity(new Vector3(0, -9.81f, 1f));

// creating ground body
btCollisionShape groundshape =
                    new btBoxShape(new Vector3(20, 1 / 2f, 20));
shapes.add(groundshape);
btRigidBodyConstructionInfo bodyInfo =
           new btRigidBodyConstructionInfo(0, null,
	                                   groundshape, Vector3.Zero);
this.bodyInfos.add(bodyInfo);
btRigidBody body = new btRigidBody(bodyInfo);
bodies.add(body);
world.addRigidBody(body);
// creating sphere body
sphereMotionState = new btDefaultMotionState(sphereInstance.transform);
sphereMotionState.setWorldTransform(sphereInstance.transform);
final btCollisionShape sphereShape = new btSphereShape(1f);
shapes.add(sphereShape);
bodyInfo = new btRigidBodyConstructionInfo(1, sphereMotionState,
                                           sphereShape, new Vector3(1, 1, 1));
this.bodyInfos.add(bodyInfo);
body = new btRigidBody(bodyInfo);
bodies.add(body);

Bullet's Impact on render()

world.stepSimulation(Gdx.graphics.getDeltaTime(), 5, 1/60.0f);
sphereMotionState.getWorldTransform(sphereInstance.transform);

Collision Events

public class MyContactListener extends ContactListener {
   @Override
   public void onContactStarted(btCollisionObject colObj0,
                                btCollisionObject colObj1) {
      Gdx.app.log(this.getClass().getName(), "onContactStarted");
   }
}
and in your game class's create():
MyContactListener contactListener = new MyContactListener();

Now, back to di Guiseppe and Space Gladiators

Instead of using the default motion state class:

lecture 12

Reading Assignment

Read Networked Graphics Chapter 3.
This is a fairly detailed introduction to the internet. We have already covered several topics from here, and I will be selective/terse about what else to spend class time on from Chapter 3. But, there are probably some concepts and definitions there that I will cherry pick and add to lecture notes on Friday or next week.

JSON Commentary

If you need help, you might play with the following:
  1. download JSON-java from github
  2. unpack the .zip and move it into subdirectories org/json. You might need slightly more thought as to placement or package naming.
  3. compile all its .java files with javac *.java
  4. check that import org.json.*; works
  5. write a simple json test program
Comments:

Discussion of simplifying from CVE-based JSON

models vs. models
In our bullet examples last time, there were parallel models for what to render vs. what to do physics simulation on.
render model
(collection of) 3D models. Maybe just an Array (brute force) or maybe a fancy data structure of its own, or maybe just a special traversal of the next model type.
logical model
(graph of) the spaces in the game, and the entities within them. The logical model is not about logic in a PROLOG sense, but unlike the render model it is concerned with enabling all kinds of game mechanics, not just drawing graphics.
JSON file from HW#2 is our start on a logical model
Structure of node==Room, edge==Door|Opening, inspired by MUDs
In a previous class we peeked at a sample json file, in which I complained about seeming inconsistency between walls and floor/ceiling:
 "texture": "csacwalls.gif",
 "floor": {"class": "Quad", "texture": "csaccarpet.gif"}
It would be more consistent to say something like
 "walls": "csacwalls.gif",
 "floor": "csaccarpet.gif",
or
 "walls": {"class": "Quad", "texture": "csacwalls.gif"},
 "floor": {"class": "Quad", "texture": "csaccarpet.gif"},
It is fine to apply implicit semantics to fields in your logical model
Such as saying that a wall field whose value is a filename will result in the creation of a Quad object with texture field containing the filename.
Texture tiling repeating might also matter for these wall/floor/ceiling
If you just stretch csacwalls.gif over an entire surface, the original image will be huge, or very low resolution. Better to repeat a small image a number of times.
Should (u,v) texture coordinates be in a logical model?
At least for CVE, the answer was no, the same textures are used in many rooms, there is a separate texture info database that knows for each texture image how big the original object is. The # of repeats is then calculated for any logical wall you apply that texture to.
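That repeat computation is tiny; here is a hedged sketch (the class, method, and parameter names are hypothetical, as is the idea of a texture-database entry holding the world-space size one copy of the image covers):

```java
public class TextureTiling {
    // One copy of the texture image covers texW x texH world units
    // (from the texture info database); compute the (u,v) repeat counts
    // for a wall of wallW x wallH world units.
    public static float[] repeats(float wallW, float wallH,
                                  float texW, float texH) {
        return new float[] { wallW / texW, wallH / texH };
    }
}
```

The resulting repeat counts are what you would feed to something like modelBuilder.setUVRange(0, 0, repeatX, repeatY) on the rendering side.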

csac.json

Resuming Discussion of Space Gladiator Ch. 3 Code

Familiarize yourself with the libGDX Vector3 class in order to be able to read some of the code here.

The Rest of the Space Gladiator Code

lecture 13

Announcement

[Steed/Oliveira] Chapter 3

Beyond slides, I just looked for things that matter, that might not have been already said.
client
both the machine and the process a user is using to perform network-based computing task(s)
host
a computer running a server
server
both the machine and the process that are providing services on one or more ports
peer
machines and processes that connect directly to each other, sans server
protocol
like a file format: lexical and syntax rules for computer communication. Generic ones might consist of a header and a data payload. Applications protocols may be arbitrarily more complex.
protocol stack
protocols are usually built on top of other protocols, in layers. Layers add latency and bandwidth overhead.
How many layers are in a network protocol stack?
There is a 7 layer OSI model. [S/O] Figure 3.1 shows 5 layers. application layer (app protocol), TCP/UDP layer (ports), IP layer (point to point across multiple networks), network layer (gateway to gateway across a single network), and physical layer (wire, radio, etc.).
application protocol considerations: compactness, robustness, efficiency
Compactness
Can trade CPU+latency+code complexity for compactness (compression); matters mainly for streaming media, but for networked games it will affect your scalability limits, e.g. how many avatars can run around at the Orgrimmar bank before clients start to fail.
Robustness
Robustness has grown in importance over time; the internet has grown less reliable at the same time as our society has become completely dependent on it. Considerations include: sentinel values in the protocol, recovery from errors, multiple connections and reconnections, server redundancy and failover, compensating for temporary problems...
Efficiency
We can talk about signal/noise ratio, or how much functionality per byte is delivered, or how repetitive or redundant a protocol is.
ipconfig and ifconfig
Tell you your IP#, how connected, etc. Usually I have to type /sbin/ifconfig on Linux
Try Wireshark
Might help debugging later this semester
Standard ports and services
Mentioned before, Table 3.1 gives several of them. It is interesting how many are now obsolete and how many remain ubiquitous. There are a fixed # of reserved ports. How long before we need to garbage collect some of the obsolete ones? In most cases, an obsolete protocol is not completely dead, it is just deprecated/replaced by a better one, so we cannot actually retire/reuse the port. Examples:

Obsolete protocols:
service port
ftp 21
smtp 25
finger 79
http 80

Live protocols:
service port
ssh 22
dns 53
pop 110
imap 143
https 443

DNS -- how much do you know?
It is perhaps the most common point of attack and failure. Apps need to minimize dependence on it. DNS names are supposed to be cached, but only for 24 hours at a time.
Telnet as a testing/debugging tool
At least for text-based TCP app protocols, telnet can be educational
Long discussion of TCP and UDP
Main thing you are supposed to understand is that TCP is full of ACKs and NAKs, trading high and highly variable latency for a byte-stream abstraction. UDP lacks ordering and reliable delivery. It is easy for an application to guarantee order over UDP by just numbering packets and dropping any that arrive out of order. It is harder for a UDP application to achieve reliable delivery, but one could selectively ACK/NAK some packets and not others to try to have the best of both worlds over UDP.
Network Layer -- IP
I implemented IP once, it was kind of fun. It was about two things: routing and fragmentation.
Routing
In a typical client computer the only routing decision is: is this packet for another process on this machine (127.0.0.1), or does it go out on the network? For network routers/gateways, including specially configured client computers, there are multiple outgoing network connections and a routing table is used to say for any given IP #, which outgoing connection to send on. When a node on the internet fails, routing tables get updated to route around the failure.
Fragmentation
IP also handles differences in packet sizes. For example, suppose you have a 4KB packet (Ethernet frames are variable-sized, classically up to about 1500 bytes of payload), but you want to route it through a network with a small fixed packet size (ATM used 53-byte cells, each carrying 48 bytes of payload). So fine, IP will break a single large packet into dozens of fragments to use the lower-level network, and reassemble them at the other end.
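A quick sketch of the fragmentation arithmetic, assuming 48 payload bytes per 53-byte ATM cell (the numbers here are illustrative, not from any IP implementation):

```java
// Back-of-the-envelope fragmentation arithmetic.
public class FragMath {
    // number of fixed-size fragments needed to carry `total` bytes
    public static int fragments(int total, int payloadPerFragment) {
        // ceiling division: round up, since a partly-full last fragment
        // still costs a whole fragment
        return (total + payloadPerFragment - 1) / payloadPerFragment;
    }

    public static void main(String[] args) {
        // a 4096-byte packet over 48-byte ATM cell payloads:
        System.out.println(fragments(4096, 48)); // 86 cells
    }
}
```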
Ping, traceroute
Two tools you should know about and potentially use as needed
DHCP
Dynamic Host Configuration Protocol, the way most clients get their IP#.
NATs
Do we need to talk about how these killed* peer-to-peer, or about hole punching?

On Textures and Texture Tiling

Calls like

wall = modelBuilder.createBox(...)
can be replaced by
modelBuilder.begin();
// setUVRange() and box() are MeshPartBuilder methods, so keep the
// builder that part() returns
MeshPartBuilder mpb = modelBuilder.part("box", GL20.GL_TRIANGLES,
                                        attributes, material);
mpb.setUVRange(0, 0, repeatX, repeatY);
mpb.box(width, height, depth);
wall = modelBuilder.end();
Where did the material come from? Instead of a color material, try a textured material:
Texture walls = new Texture(Gdx.files.internal("Objects/walls.jpg"));
walls.setFilter(TextureFilter.Linear, TextureFilter.Linear);
walls.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat);
material = new Material(TextureAttribute.createDiffuse(walls));

lecture 14

How is HW#2 Going?

Intro to 3D Modeling (di Giuseppe Chapter 4)

The four tasks for producing 3D assets are: modeling, texturing, rigging, and animating.

3D Modeling "Theory"

lecture 15

Announcements

Mailbag

In the sample file you provided each room has an x, y, and z coordinate, and also a width, height, and length. From my understanding, width, height, and length correspond to the size of the walls, ceiling, and floor, but I'm unsure what the actual x, y, and z coordinates are. Are they the coordinates of one corner of the room or the center or something else?
x, y, and z are the location of the northwest corner of the room, in "world" coordinates, relative to some abstract origin point that might be the northwest corner of the entire model space. x and z are positive going east and south; their effect is to describe the positions of rooms relative to each other. y is the vertical position; the ceiling is a horizontal plane at coordinate y+height.
My second question has to do with the decorations. Each decoration has 12 coordinates and I remember finding the function that corresponds to this one day in class, but I misplaced it and can't seem to remember what it was.
Decorations have twelve numbers corresponding to four (x,y,z) positions of the corners of a rectangle. They must be planar.
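The planarity requirement above is checkable with a little vector math: four points are coplanar exactly when the scalar triple product (p1-p0) x (p2-p0) . (p3-p0) is (near) zero. A small validator sketch, with invented names:

```java
// Sketch: verify a decoration's four (x,y,z) corners are coplanar.
public class PlanarCheck {
    public static boolean planar(double[] p0, double[] p1,
                                 double[] p2, double[] p3) {
        double[] a = sub(p1, p0), b = sub(p2, p0), c = sub(p3, p0);
        double[] n = cross(a, b);                       // normal of p0,p1,p2
        double triple = n[0]*c[0] + n[1]*c[1] + n[2]*c[2];
        return Math.abs(triple) < 1e-9;                 // p3 lies in the plane?
    }

    static double[] sub(double[] u, double[] v) {
        return new double[]{u[0]-v[0], u[1]-v[1], u[2]-v[2]};
    }

    static double[] cross(double[] u, double[] v) {
        return new double[]{u[1]*v[2]-u[2]*v[1],
                            u[2]*v[0]-u[0]*v[2],
                            u[0]*v[1]-u[1]*v[0]};
    }

    public static void main(String[] args) {
        // a unit square in the y=0 plane: planar
        System.out.println(planar(new double[]{0,0,0}, new double[]{1,0,0},
                                  new double[]{1,0,1}, new double[]{0,0,1}));
    }
}
```

A loader could run this over each decoration and reject (or warn about) non-planar ones instead of rendering them wrong.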

Blender

Blender comes from blender.org
It has a Eurocentric open source flavor to it.
Current version is 2.82
But any 2.7x or 2.8x is probably good for this class
Blender has always been...humbling for me.
Feels like Colonel Oakes at 0:50

lecture 16

Lecture #16 was a guest lecture overview of Blender 3D model operations.

lecture 17

Lecture #17 was a guest lecture overview of Blender texturing-by-painting, UV mapping, and the like.

lecture 18

Where we are at in the Course

Discussion of HW#3

The homework asks you to make

More broadly, I am trying to incrementally get us from the di Giuseppe one-player FPS to MMO-style multi-player. There are architecture questions.

Where should player state reside, authoritatively?
Probably on the server, cached read-write on the owning client, cached read-only on other clients.
How would we do that?
Probably by developing new class(es) on the server, and new network messages that read/write that state for appropriate game events/semantics
What about monster spawning, and monster state?
In Di Giuseppe there is one monster that respawns instantly on the client whenever it is killed. What should the multi-player game do?
What network messages are needed for the combat system?
Almost gave you this as part of HW#3, but had mercy.
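To make the "new classes and new network messages" answer concrete, here is one hypothetical shape it could take (all class and message names invented; this is not the HW#3 solution): a server-side player state record that can round-trip through a one-line text message.

```java
// Hypothetical sketch: server-authoritative player state, carried by a
// CVE-style backslash command. Field and message names are made up.
public class PlayerState {
    String username;
    float x, y, z, heading;   // authoritative copy lives on the server
    int hitpoints;

    // encode as a one-line network message
    public String toMessage() {
        return "\\player " + username + " " + x + " " + y + " " + z +
               " " + heading + " " + hitpoints;
    }

    // decode a message back into a state object (no error handling shown)
    public static PlayerState fromMessage(String line) {
        String[] f = line.trim().split("\\s+");
        PlayerState p = new PlayerState();
        p.username  = f[1];
        p.x = Float.parseFloat(f[2]);
        p.y = Float.parseFloat(f[3]);
        p.z = Float.parseFloat(f[4]);
        p.heading   = Float.parseFloat(f[5]);
        p.hitpoints = Integer.parseInt(f[6]);
        return p;
    }

    public static void main(String[] args) {
        PlayerState p = new PlayerState();
        p.username = "alice"; p.x = 1; p.y = 0; p.z = 2;
        p.heading = 90; p.hitpoints = 50;
        System.out.println(p.toMessage());
    }
}
```

The server would own the master copy; the owning client caches it read-write and other clients cache it read-only, with messages like this keeping the caches in sync on the appropriate game events.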

Blender - a few notes

left click == set the cursor, "point of action"
but mouse is in 2D space, where in 3D space?
Blender likes to use the middle mouse button
Blender's UI is highly "modal"
Six major modes, starting from Object Mode
Moving (Translate). "g" key + mouse move until click
mouse is in 2D space, where do you go in 3D space? maybe depends on camera and cursor. Can also translate one dimension at a time, or enter raw numeric values.
Scale. "s" key + mouse move. Can also scale one dimension at a time.
Rotate. "r" key + mouse move. Can also rotate one axis at a time.
Subdividing objects via loop cut and slice is pretty easy.
Horizontal and vertical slices, maybe defined by your camera position.
E (extrude) is a fast alternative to a lot of creating attached objects and then subdividing them with loop cut and slice...
but you probably want to loop cut and slice enough first in order to restrict the scope of what you are extruding.
Plan to toggle back and forth between Edit Mode and Object Mode a lot.
You need Object Mode to select which object you are working on, then Edit Mode to select a face, edge, or vertex on it. Common bug: trying to select a face on a different object than the currently selected object. Select the object first.
Effects of translating e.g. a face were non-intuitive.
One must play with it awhile.

Sketch Before you Model

Some more 3D "theory"

Free Models

Texture Theory

Preparing a Blender Model for Texturing (UV Mapping)

Textures in Blender

Dr. J Reflects on Textures

These may or may not be true statements! But they seem to be true, from personal experience for non-raytraced (classical) 3D rendering. Real-time raytracing may change the game in our lifetimes.
texture everything
Non-textured OpenGL images are a joke... or at least, they are just placeholders until you texture. Mixed textured and non-textured images usually mess up on the lighting; for example in CVE, non-textured tables and chairs often look wrong. Texturing everything used to mean that you would run out of texture memory on the GPU, so you'd have to reduce texture sizes to make things fit.
textures for walls/floors/ceilings must be "tiled".
repeat a modest texture over and over to avoid exceeding texture budget.
it's all about the seams
on both characters and surfaces, discontinuity grates. Tiling means each edge of a texture must match its opposite edge.
texture at multiple resolutions (mipmapping)
Costs ~1.33x texture memory and avoids lots of bad artifacts; if you are lucky and code well, the library/API will do most of the work
in order to have any lighting effects at all, blend
"fairly easy" to mix texture and material surface

lecture 19

Discuss How to Solve the HW#2 Grading Dilemma...

Options include:

...and Discuss What to Do About Server

One of you wrote, roughly, "I have no idea what I am doing".
Fair enough. We haven't read and learned enough yet to write a good server.
Options include:

[Steed/Oliveira] Chapter 4

lecture 20

Where we are at in the Class

Rabin Network and Multiplayer Theory/Basics

Lots of good juicy material here. Did we previously look at this? My notes say maybe we did slides 1-12?

More Reading About Networking

If our textbook isn't cutting it for you, consider one or more of the following supplemental resources:

Things I Think I Know About Networking

Socket I/O can be fairly easy, except:

On Writing a Server

Discussion of Basic Client and Server Code

A Single-Threaded Server

Also: a fine example of why Dr. J would rather talk about networking using Unicon:
Java:
/*
 * Simple Java TCP Server. Adapted from
 * https://systembash.com/a-simple-java-tcp-server-and-tcp-client/
 * which in turn is from "Computer Networking" by Kurose and Ross.
 */
import java.io.*;
import java.net.*;

class TCPServer {
 public static void main(String argv[]) throws Exception {
  String line;
  String capitalizedSentence;
  ServerSocket welcomeSocket = new ServerSocket(6789);

  while (true) {
   Socket n = welcomeSocket.accept();
   BufferedReader inFromClient =
    new BufferedReader(
     new InputStreamReader(n.getInputStream()));
   DataOutputStream outToClient =
    new DataOutputStream(n.getOutputStream());
   line = inFromClient.readLine();
   System.out.println("Received: " + line);
   System.out.flush();
   capitalizedSentence = line.toUpperCase() + '\n';
   outToClient.writeBytes(capitalizedSentence);
  }
 }
}

Unicon:

procedure main()
   repeat {
      if not (n := open(":6789", "na")) then stop("server: no socket")
      while line := read(n) do {
         write("Received: ", line)
         write(n, map(line, &lcase, &ucase))
         }
      }
end

lecture 21

Take Home Midterm

Finish Up Discussion of Single-Threaded Server Example and its Client

A server without a client is only half the story.

Simple Java Client:
/*
 * Simple Java TCP Client. Adapted from
 * https://systembash.com/a-simple-java-tcp-server-and-tcp-client/
 * which is from "Computer Networking" by Kurose and Ross.
 */

import java.io.*;
import java.net.*;

class TCPClient {
 public static void main(String argv[]) throws Exception {
  String sentence;
  String modifiedSentence;
  BufferedReader inFromUser = 
     new BufferedReader(new InputStreamReader(System.in));
  Socket clientSocket = new Socket("localhost", 6789);
  DataOutputStream outToServer =
     new DataOutputStream(clientSocket.getOutputStream());
  BufferedReader inFromServer =
     new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
  sentence = inFromUser.readLine();
  outToServer.writeBytes(sentence + '\n');
  modifiedSentence = inFromServer.readLine();
  System.out.println("FROM SERVER: " + modifiedSentence);
  clientSocket.close();
 }
}

Unicon:

procedure main()
   if not (n := open("localhost:6789","n")) then
      stop("no socket")
   while line := read() do {
      write(n, line)
      write(read(n))
      }
end

Going Multi-User

A server process must never block waiting for input from one client; other clients perceive that as being hung.

Consider this a discussion of TCP to complement [S/O Ch. 4] which only discusses UDP.

Options for going multi-user:

  1. "Instant" connections
  2. One thread/process per user
  3. Event driven
  4. Hybrid

Instant Connections

Make every connection very brief (say, <= k ms).
Only a few internet protocols fit this model (finger, daytime). It fits when the server requires no interaction to respond to the request. Early web servers almost fit this model(!), but not for long.
Look again: the Java TCP server above is almost this technique.

What would you have to do to make it more completely fit?

One thread/process per user

fork a process (later: spawn a thread) per connected user.
So each thread manages one user's network connection. This is especially viable for independent services like web servers, where the service is embarrassingly parallel.
The thread-per-client model is addressed semi-nicely by the previous LibGDX chat client and server suggested for your HW#2 use.
Did it work for you as-given? If not, what went wrong?

Event driven

We discuss non-blocking I/O first, then the select() function for handling multiple connections within a single thread.

Non-Blocking read()

The point here is to read input data that is already available, and not wait for it, so that you continue to provide service to other users instead of waiting.

You have to ask the operating system to put a socket in non-blocking I/O mode.
C:

    if ((new_fd = accept(sockfd, (struct sockaddr *)&their_addr,
                         &sin_size)) == -1) {
        perror("accept");
    }
    /* change the sockets into non-blocking state */
    fcntl(last_fd, F_SETFL, O_NONBLOCK);
    fcntl(new_fd, F_SETFL, O_NONBLOCK);

Java:

    //... after an accept(), on a SocketChannel
    sc.configureBlocking(false);

select()

C select():

int select(int maxfd, fd_set *readset, fd_set *writeset,
           fd_set *exceptset, const struct timeval *timeout);

Returns: positive count of descriptors ready, 0 on timeout, -1 on error.
Arguments:
  1. maxfd: highest-numbered descriptor in any of the sets, plus 1.
  2. readset: descriptor set that we want the kernel to test for reading.
  3. writeset: descriptor set that we want the kernel to test for writing.
  4. exceptset: descriptor set that we want the kernel to test for exception conditions.
  5. timeout: how long to wait for select() to return, as a
     struct timeval {
       long tv_sec;
       long tv_usec;
     };

     if timeout == NULL, wait forever
     if timeout is a fixed amount of time, wait up to that long
     if timeout is a zeroed struct, return immediately (poll)

Java select():
    Selector selector = Selector.open();
    ServerSocketChannel ssChannel = ServerSocketChannel.open();
    ssChannel.configureBlocking(false);
    ssChannel.socket().bind(new InetSocketAddress(hostIPAddress, port));
    ssChannel.register(selector, SelectionKey.OP_ACCEPT);
    while (true) {
      if (selector.select() <= 0) {
        continue;
      }
      processReadySet(selector.selectedKeys());
    }
...
  public static void processReadySet(Set readySet) throws Exception {
    Iterator iterator = readySet.iterator();
    while (iterator.hasNext()) {
      SelectionKey key = (SelectionKey) iterator.next();
      iterator.remove();
      if (key.isAcceptable()) {
        // ... go ahead and do an ssChannel.accept()
	// which gives you a SocketChannel, not a socket
      }
      if (key.isReadable()) {
        // ...key.channel() gives you SocketChannel
	// ...do a non-blocking read from SocketChannel
      }
    }
  }

See also:

I will see what I can find regarding a larger/comparable Java example.
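In the meantime, here is a minimal, self-contained sketch (not from the course materials) combining the fragments above into one complete single-threaded selector server: it accepts any number of clients on one thread and echoes each client's bytes back upper-cased. The port number is arbitrary.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

// A complete single-threaded Java NIO server: one Selector multiplexes the
// listening channel and every client channel, so no call ever blocks on a
// single client.
public class SelectServer {
    public static void serve(int port) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel ss = ServerSocketChannel.open();
        ss.configureBlocking(false);
        ss.socket().bind(new InetSocketAddress(port));
        ss.register(selector, SelectionKey.OP_ACCEPT);
        ByteBuffer buf = ByteBuffer.allocate(4096);
        while (true) {
            if (selector.select() <= 0) continue;
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel sc = ss.accept();  // a SocketChannel, not a Socket
                    sc.configureBlocking(false);
                    sc.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel sc = (SocketChannel) key.channel();
                    buf.clear();
                    int n = sc.read(buf);            // non-blocking read
                    if (n < 0) { key.cancel(); sc.close(); continue; }
                    buf.flip();
                    byte[] bytes = new byte[buf.remaining()];
                    buf.get(bytes);
                    sc.write(ByteBuffer.wrap(
                        new String(bytes).toUpperCase().getBytes()));
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        serve(6789);
    }
}
```

You can poke at it with telnet (telnet localhost 6789) from several terminals at once; each connection gets service even while the others sit idle.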

Hybrid

By the way, these are not really mutually exclusive.
to scale up to more users, one may combine multiple earlier techniques.
Production-grade web servers like Apache use many processes, each of which has many threads, to handle large numbers of users.

lecture 22

Java Thread Communication and Monitors

As you recall, your options in a server are to either do a thread for each network connection, or make sure your server never blocks.
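The "monitor" half of the lecture title boils down to synchronized methods plus wait()/notifyAll(). A minimal illustrative example (not from the course code): a one-slot mailbox where a producer thread and a consumer thread hand off messages safely.

```java
// A minimal Java monitor: a one-slot mailbox. The synchronized keyword is
// the mutual exclusion; wait()/notifyAll() are the condition synchronization.
public class Mailbox {
    private String message;               // null == slot is empty

    public synchronized void put(String m) throws InterruptedException {
        while (message != null) wait();   // wait until the slot is free
        message = m;
        notifyAll();                      // wake any waiting consumer
    }

    public synchronized String take() throws InterruptedException {
        while (message == null) wait();   // wait until the slot is full
        String m = message;
        message = null;
        notifyAll();                      // wake any waiting producer
        return m;
    }

    public static void main(String[] args) throws Exception {
        Mailbox box = new Mailbox();
        Thread producer = new Thread(() -> {
            try { box.put("hello"); box.put("world"); }
            catch (InterruptedException ignored) {}
        });
        producer.start();
        System.out.println(box.take() + " " + box.take()); // hello world
        producer.join();
    }
}
```

Note the while (not if) around each wait(): a woken thread must re-check its condition, because another thread may have grabbed the slot first.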

Evolution of the CVE Virtual Environment

We'll talk a bit about the CVE virtual environment in this class, using it as a "pseudocode example" of multi-user virtual environment architecture. Consider it as a source of several lessons learned for me, and ask hypothetically how easy (or difficult) it would be to implement this same program in Java/libGDX.

Some notes are here.

lecture 23

CoronaVirus Update

Net Client Code

Net Server Code

Presented earlier; did we miss anything in here?

How Much Work Should be in the Server?

A big-picture question for the 2nd half of the class is:
For a multi-user game, how much work goes in the client, and how much in the server?

lecture 24

Class on 3/23/2020 is Via Zoom

HW#3 Extension

Homework #3 is now due Friday, 11:59pm. Deliver some working network code, hopefully showing multiple users (and their movements) visibly in some form. Include reasonable instructions on how to run what you deliver.

Midterm Exam Results

I was not too thrilled with my take-home midterm, but some of you wrote much stronger answers than others.

grade distribution:

94
92
90
89 89
--------------------------- A
87
86
82 82
--------------------------- B
70

Highlights from Student Midterm Solutions

As a test, this part of the lecture (4 minutes) has been recorded and placed on this link, provided through stream.uidaho.edu and UI's access to Microsoft's stream service. Please let me know if you can access the video or have difficulties, and whether you want (or need) the class lecture material to be accessible in this way. A summary of the video content is given below.

Best answers on the 3D vs. 2D question managed to say something deeper than "3D is more difficult to program than 2D", although that is certainly true. It was good if you noted that the mechanics in 3D were often more complicated, and that 3D often spends a much higher percentage of its budget on assets. Several of you noted that many game genres can use either 2D or 3D, but some genres are tied to one or the other. For example, it might be hard to imagine a 3D side-scroller, or a 2D first person shooter (although there are lots of 2D "shooters").

Best answers on multi-user vs. single-user managed to say something deeper than "multi-user is more difficult to program than single-user". Some folks said, or almost said, that playing with (or against) other humans reduces the need in multi-user games for good AI -- because the other humans constitute nonartificial intelligence.

Where do we go from here

[Steed/Oliveira] Chapter 5

We have a lot of chapters left in the Steed book. I am going to point out just the cool stuff.
Multi-user games almost have to be client/server
Mainly to limit or eliminate cheating; a side benefit is that moving some compute tasks onto the server reduces hardware requirements for users. A downside is that servers are not free. Most MMOs have died because they couldn't handle server costs. I miss City of Heroes.
Coping with Scale
The book is right that many of the main computer science issues in networked games surely have to do with scaling up to more graphics and more users. The book's main concern is: too many network messages. I will present their scaling techniques with examples from my own experience.
Scaling Technique #1: High-level descriptions
client/server should transmit fewer, high level messages rather than more, lower level messages.
Scaling Technique #2: Clear separation of local and remote behavior
Not every machine needs to be informed (or do any compute work based on) every message
Scaling Technique #3: Locality of action
In a large environment, only objects that are nearby need to be updated.
Scaling Technique #4: Most data is effectively static
Static data may be pre-installed on local clients, and loading it involves no race conditions or server synchronization, beyond the issue of (potentially) downloading updates.
Typical NG (Toontown) connects to server, server connects to database.
Thin client? or...
Thin client == (Almost) Everything on Server. All details have to be sent from server all the time. Used in controlled networks (e.g. military sims), not really OK for commercial MMOs.
...or fat client?
Fat client == Client replicates massive state, server just sends incremental updates. But client has to function whether or not those updates are timely. Lag will often result in embarrassing visuals, client may estimate or "fake" what networked entities are doing when packets are delayed or dropped.
Ownership and Locking
Should objects in the game be ownable/controllable by individuals? Suppose there is a health pack on the ground, and two clients both grab it at the same time? Sometimes the simplest way to prevent two users' actions from mutually contradicting each other is to implement a locking mechanism. Just don't leave yourself vulnerable to deadlock.
persistence
Whether you use a database or not, your game probably needs a persistence strategy. In this class you probably need to keep it simple.
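The health-pack race under "Ownership and Locking" above can be sketched as code. This is a hypothetical server-side item class (names invented); the synchronized method is the lock, so exactly one claimant wins no matter how the grab messages interleave.

```java
// Sketch: server-side ownership of a grabbable item. The synchronized
// claim() acts as the lock that resolves simultaneous grabs.
public class HealthPack {
    private String owner = null;

    // returns true only for the first claimant
    public synchronized boolean claim(String player) {
        if (owner != null) return false;   // someone else got here first
        owner = player;
        return true;
    }

    public synchronized String getOwner() { return owner; }

    public static void main(String[] args) {
        HealthPack pack = new HealthPack();
        System.out.println(pack.claim("alice"));  // true: alice wins
        System.out.println(pack.claim("bob"));    // false: already taken
    }
}
```

The server would then tell alice's client "you got it" and everyone else "the pack is gone"; since only one lock is ever held at a time here, there is no deadlock exposure.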

lecture 25

Intermittent Internet, Audio Distractions from Home

Practice Raising your Hand

If you text chat me enough, and are patient, I will probably respond to that. But if you click on the Attendees (Participants) button, that window also has a "raise hand" button that you can toggle in order to raise your virtual hand, which might be a bit more in-my-face than the chat window. Miguel already knew how to do it last class. Let's practice it now; see if you can find it and raise your hand and be recognized at least once in today's class.

CVE: sample new user's session transcript

Since we are not requiring you to connect to / use the CVE server this semester, the point of this look-see is to give you ideas for your own game and its protocol. You do not have to do things exactly like this, but you may want to do some equivalent things for a subset of this.
\login -cvecypherusername password
The login command uses a crude cypher so that username and password are not transmitted in plain text. For a new user account creation, the username and password are for the admin user ("system") on the CVE system, since privileges are required to create a new account. Note the weird implementation of the -cvecypher option in CVE, which has no space after it (bad student code).
\newuser username password FirstName LastName email affiliation
Ironic that the new user message is sent unencrypted. LOL. I will not let that happen next time.
\transfer filename filesize server
Client requests to upload a file of a given name/size. The "filesize" part of this could be used to decide whether to open a separate port/connection to do the transfer or just do it inline within the current connection. The "server" part of this is a bit fishy.
\logout
This command is self-explanatory. Server will close the socket. You will presumably have to reconnect. In CVE, the \newuser command is issued by the front-end process that also downloads updates. You do not have to implement a separate client front-end process or an update mechanism.
\login -cvecypherusername password
The real login, by the real client
\version 8.9
If your multi-user game lives long enough, clients and server need to be the same (or compatible) version
\users
request the list of users
\setip
Tell the server what IP the client thinks they have.
\checkforupdates n
Uses a timestamp to decide if newer file(s) are available
\updatelocations username
when someone logs in, others get informed, and the user may become visible if they are in the same room, or near
\back Online (twice?!)
the \back command is part of the AFK system, which allows other clients to know when you are AFK.
\latency n
allows client to check round trip time to server and back. Not just a one-time hardware test on startup; the client may want to keep user informed as to how their net connection is doing. How can a user use this information? How should it be presented?
\move username body x y z a (often at least two in a packet)
set the whole avatar's position and orientation (only 1D of angle)
\move username part right_arm fb 10
part moves apply transformations to rigging/bones under program control. In this example, set the right arm rotation to 10 degrees
\updateMode username 3D
CVE client has at least a "3D" mode and a (2D top-down) "Map" mode, maybe also modes for the user being in the IDE. Game design question: To what extent should the server or other clients know when you are looking at your map, or programming instead of seeing the 3D environment?
Server's responses to client include:

Extra protocol commands needed for a game like a first-person shooter:
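A hedged sketch (not the actual CVE source) of how a server or client might split such backslash commands into a command name and arguments, as a first step toward dispatching on them:

```java
import java.util.Arrays;

// Sketch: split a CVE-style backslash command line into its parts.
public class CommandLineParser {
    // "\move jeffery body 10 0 25 90" -> "move"
    public static String command(String line) {
        return line.trim().split("\\s+")[0].substring(1); // drop leading '\'
    }

    // everything after the command word
    public static String[] args(String line) {
        String[] f = line.trim().split("\\s+");
        return Arrays.copyOfRange(f, 1, f.length);
    }

    public static void main(String[] a) {
        String line = "\\move jeffery body 10 0 25 90";
        System.out.println(command(line));               // move
        System.out.println(Arrays.toString(args(line)));
    }
}
```

A dispatcher would then switch on the command string and hand the argument array to the matching handler, which is roughly what the CVE client's dispatcher object does with incoming server messages.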

lecture 26

Recording of class in 3/27/2020

Let's Discuss HW#4

Two homeworks left this semester. Make sure I am not asking too much, and make sure what is requested is understandable. If you are not doing a FPS, you are expected to do "similar" amount of functionality (new network messages for new game mechanics).

[Steed/Oliveira] Chapter 6: Sockets and Middleware

Middleware is when large parts of your "game code" could actually be written generically as libraries that can be re-used by other games. In an extreme case, some middleware functionality might become so ubiquitous that programming languages support it as built-in parts of their runtime system; for example, it could be bundled into JVM or the Unicon VM instead of externally loaded code.
First role of middleware: Operating System portability
libGDX .net module is middleware. Unicon's networking facilities are middleware. Fine.
Second role of middleware: high-level connection management
For example, a peer-IP directory service matching up competitors or partners is not game-specific. Another example might be a net API that autorecovers from lost connections.
Third role of middleware: support for formatting and sending messages
This "protocol support" might include serialization of structures, or encoding of binary data for ASCII transmission. It might also include notions of packet aggregation and/or buffering.
Fourth role of middleware: provide a complete solution for networking
This would be for some particular common category of game, such as an FPS. A person writing a new FPS might just use it as-is.

C socket API

Some C/C++ Middleware Libraries

Chapter 6 describes 3 out of dozens that could be used. We will not use any C/C++ libraries in this course.
HawkNL
Never heard of it. Thin layer simplifying writing sockets more portably. Probably not worth it.
SDL_NET
SDL is a huge popular game library. SDL_NET seems to not quite be a part of SDL, just intended to be used with it. A bit higher level than HawkNL.
ACE
A more high-powered C++ library for network + related stuff (threads, logging, services, queues).

Your 428 server compared with the CVE server

A peek at JSONified jeb.json (.txt)

~2K LOC, perhaps around 50 "rooms" (connected cuboids). Validated by feeding it into Firefox, the browser I am quasi-boycotting. Probably all of JEB, the only thing extra in CVE is the outdoors front area. Textures are in jeb2/ and jeb2/textures/. They still need converting, probably all .gif and .png should be converted to .jpg for libGDX maximum portability purposes (the HTML5 target, at least, is picky about image format).

lecture 27

Pass/Fail Option; Later Drop Deadline

Steed Chapter 7

DIS
A protocol for interconnection of simulators, like flight simulators.
Protocol Data Unit (PDU)
1993/1995 IEEE standard most recently updated in 2015
X3D
open ISO standard for 3D scenes

Lecture 28

Record the Lecture Please, Dr. J

Someone needs to say it.

Steed Chapter 8: object sharing systems

object location transparency
one can make writing the client easy by making it look like setting an object's fields, or calling its methods, is local, when in fact the object is remote. (Semi)famous research programming languages explored this heavily in the 1980's. Do any modern languages do it? Which ones?
(illusion of virtual) shared data structure
in some sense, if all clients can just do operations on a shared data structure, this shifts the emphasis (in the code/logic) from network messages and events back to the application domain algorithms.
object distribution
instead of replicating all the state of all objects on all machines, if object state is large but network bandwidth is available, it would be possible to host large numbers of objects on individual machines, and allow other machines to access via proxy and/or remote procedure call
object mobility
some distributed systems allow objects to move from machine to machine, for example if they are being used mainly by a remote machine, move the data to where it is being used the most.
sharing policy
Steed makes a big point: do you send a whole object every time one of its member variables change? Do you have specialized getters/setters for each member variable, that trigger network messages? Packet aggregation is a must, but some information must be sent far more frequently than other information. Aggregating several variables out of same object == might as well serialize and send whole object. Aggregating several variables out of several objects == overhead of identifying each object to be modified may exceed data payload. Steed recommends separating out into separate classes, those pieces of data that have to be updated frequently, from those updated only occasionally.
sharing entities vs. scene graphs
sharing Application-level entities has certain advantages, but at least one major initiative ("Distributed Open Inventor") tried out distributed sharing of scene graphs, a relatively lower-level detailed description of a 3D scene to be rendered, comparable I guess to X3D, since X3D is a descendant of VRML, which descends from Inventor/Open Inventor. A modern version of this would thus be a distributed shared-memory API for updating an X3D-based data structure so all clients see the same graphics. Note that in most games, though, clients do not see all the same things, and level-of-detail and location-based algorithms may reduce how much data is actually shared to a demand-based subset of the whole scene. Maybe those semantics could be baked into a shared X3D SceneGraph library.
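Steed's sharing-policy advice above (split the frequently-updated data into its own small class, and send it only when it changes) might look roughly like this; all class and message names here are invented for illustration:

```java
// Sketch: split a shared avatar into a rarely-updated "cold" part and a
// frequently-updated "hot" part, and only ship the hot part when dirty.
public class AvatarShared {
    // cold data: sent once, on join or on change
    String username, modelFile;

    // hot data: changes every tick, so it lives in its own small record
    // that can be serialized and aggregated cheaply
    static class Pose {
        float x, y, z, heading;
        String serialize() { return x + " " + y + " " + z + " " + heading; }
    }

    Pose pose = new Pose();
    private String lastSent = "";

    // returns a pose message only if the pose changed since the last send
    public String poseUpdateIfDirty() {
        String now = pose.serialize();
        if (now.equals(lastSent)) return null;   // unchanged: send nothing
        lastSent = now;
        return "\\move " + username + " body " + now;
    }

    public static void main(String[] args) {
        AvatarShared a = new AvatarShared();
        a.username = "jeffery";
        a.pose.x = 5;
        System.out.println(a.poseUpdateIfDirty());  // a \move message
        System.out.println(a.poseUpdateIfDirty());  // null: nothing changed
    }
}
```

The dirty check gives you the per-tick suppression for free; aggregation would then batch several such non-null messages into one packet.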

Lecture 29

Office Hours Today Rescheduled

A Ph.D. student of Dr. Marshall Ma's is doing his Ph.D. proposal defense this afternoon at least from 1:30-2:30 and it may well go until 3, meaning at least half and probably all of today's office hours will be eaten up. If you need to consult me, send me an e-mail and suggest a day/time, I will be glad to help you if I can.

Where we are at in Lectures

Record the Lecture Please, Dr. J

Someone needs to say it.

Steed Chapter 9

Remote Method Call (RMC, RMI)

Big technical issues in implementing remote calls usually revolve around how you identify what function is to be called, what machine it is on, how to pass parameters of all types, and how to handle return values.

ONC-RPC

XML-RPC

CORBA

DIVE

A peek at one of the first important multi-user 3D applications from the research community. Predates commercial gaming MMOs by a long time (6+ years), predates the WWW, etc. Developed at SICS (Sweden) but international in scope.

Lecture 30

Homework Status

Lectures to Come

Steed Chapter 10

Distinguishing characteristics of networked games and virtual environments:
Graphics latency
Time it takes for a user input to result in visible change/response
Network latency
Time it takes for a sent packet to be received. Difficult to measure!
Round-trip time
how long it takes to send a packet and get an answer. Might include CPU time spent on server, time spent in network layers, compression and decompression, etc.

CVE Server main loop and message processing

I reviewed this code again this week in a research meeting w/ Dr. H and his Ph.D. student. I am about ready to build my next virtual environment implementation, using what I learned here, which is kind of like what your semester project needs to do.

CVE code organization

Even if you don't want to use the CVE server in your project, you might want to borrow ideas/structure from it.
cve/
cve/bin                - location of executable programs after compile/link
cve/dest               - build tools; needs updating
cve/src                - project source code
cve/src/client         - cve updater/login tool and main client
cve/src/common         - code used in both client and server
cve/src/ide            - code for collaborative IDE, part of client
cve/src/model          - code for virtual objects and behavior
cve/src/npc            - code for computer-controlled characters/bots
cve/src/server         - the main CVE server code
To see how the client interacts with messages received from the server, look at the CVE src/client directory, especially the dispatcher, an object that takes inputs from multiple sources and sends them as events (method calls) to appropriate objects.

Lecture 31

turn on the recording, Dr. J

HW#3 Status

Steed Ch. 10: Hole Punching

Getting peer-to-peer UDP packets through a NAT. See Figure 10-8 and the accompanying discussion. I am superkeen to try this out. It is said to not work if both clients are behind the same NAT. slides

Steed Ch. 11: Latency

fire-proof player problem
You shoot straight at the player, and maybe your client even renders a hit, but due to latency you don't know that the player had actually moved slightly out of harm's way. The server judges your shot to be a miss. The next packet you get is a move command for that player's avatar; it is like they were invulnerable.
shoot around corners problem
Inverse of the previous problem, you might take hits that are delivered after you feel you are safely behind cover. It might feel like the other player or NPC is cheating somehow.
Simple ways to avoid problems: avoid problems by adding MORE latency

Fancier ways of avoiding latency-related inconsistencies and such
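One of the fancier techniques is dead reckoning, which the fat-client discussion earlier alluded to: when a remote entity's packets are late, the client extrapolates from the last known position and velocity instead of freezing the avatar. A minimal sketch (names invented):

```java
// Sketch: dead reckoning. Extrapolate a remote entity's position from its
// last authoritative network update when newer packets haven't arrived.
public class DeadReckoning {
    // last authoritative state received from the network
    double x, z, vx, vz;      // position and velocity in the ground plane
    double lastUpdate;        // timestamp of that state, in seconds

    // best guess of where the entity is `now`, absent newer packets
    public double[] estimate(double now) {
        double dt = now - lastUpdate;
        return new double[]{ x + vx * dt, z + vz * dt };
    }

    public static void main(String[] args) {
        DeadReckoning enemy = new DeadReckoning();
        enemy.x = 10; enemy.z = 0; enemy.vx = 2; enemy.vz = 0;
        enemy.lastUpdate = 100.0;
        double[] p = enemy.estimate(100.5);    // half a second, no packets
        System.out.println(p[0] + " " + p[1]); // 11.0 0.0
    }
}
```

When the real update finally arrives, the client snaps (or smoothly blends) from the estimate to the authoritative position; a bad estimate is exactly where the fire-proof-player and shoot-around-corners embarrassments come from.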

Steed's Slides

Lecture 32

turn on the recording, Dr. J

Tips for making your HW so I can run it

Mostly, if you work on Windows with other folks who work on Linux, watch out for:

I have previously had trouble running libGDX applications using openjdk instead of "real" Sun Java. It's great if that's now fixed.

Latency Chapter, part 2

Steed's Ch. 11, part 2

Consider the ones we didn't get to last time.

Lecture 33

3D Models - Rigging and Animation

Joshua Dempsey generously shared with us on the topic of basic character rigging and animation, including inverse kinematics! Thanks, Josh! Josh has provided the following video links to support folks studying these topics.

Fundamentals and 5+ minute videos:

If you don't want to learn how to make a rig yourself, or want to create a much more advanced rig, use the Rigify addon:

Very short / quick videos (the dude is kind of over the top and annoying, but does give information pretty concisely):

Lecture 34

Where we are at, and Where are we Going?

Steed Chapter 12: Scalability

Generally, even a highly tuned server written in C/C++ will hit a limit somewhere around 100-200 users. How do MMOs handle thousands? Chapter 12 considers various aspects.

Although a naive generic definition of scalability in multi-user games might be: "handling more users", Steed thinks we can do better than that.

quality of service
if the game is no longer playable or no longer fun due to a high number of users, then you have failed. Avoid at all costs.
easiest way to maintain quality of service is to not scale
Our text says to just put a cap on at 30 users!
scalability includes increasing "fidelity"
maintain quality of service as the graphics and network scale to more objects, whether they are users or not.

Area of Interest

Key to scalability is that most users don't need to see/hear what everyone else is doing. Options to segregate user traffic:
separate world instances
separate geographic "zones"
separate "zones" for indoor areas
temporary zoned instances
track who is near to whom
instances for newbie zones
major activities where the 3D world events are not needed (map? crafting? AH?)
???
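Several of the options above boil down to the same mechanism: partition users into zones or instances and only relay events within a partition. A minimal sketch of that bookkeeping; the class and method names are my own illustration, not CVE's:

```java
import java.util.*;

// Sketch of zone-based traffic segregation: track which zone each user is
// in, and broadcast an event only to the other users sharing that zone.
public class Zones {
    final Map<String, Set<String>> members = new HashMap<>(); // zone -> users
    final Map<String, String> zoneOf = new HashMap<>();       // user -> zone

    // Move a user into a zone, removing them from their previous zone.
    void enter(String user, String zone) {
        String old = zoneOf.put(user, zone);
        if (old != null) members.get(old).remove(user);
        members.computeIfAbsent(zone, k -> new HashSet<>()).add(user);
    }

    // Recipients of an event from `user`: same-zone users, minus the sender.
    Set<String> recipients(String user) {
        Set<String> out =
            new HashSet<>(members.getOrDefault(zoneOf.get(user), Set.of()));
        out.remove(user);
        return out;
    }

    public static void main(String[] args) {
        Zones z = new Zones();
        z.enter("alice", "town");
        z.enter("bob", "town");
        z.enter("carol", "dungeon");
        System.out.println(z.recipients("alice")); // prints "[bob]"
    }
}
```

The server then only pays per-event cost proportional to the zone population, not the whole world's population.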

Awareness and Presence

Kinds of awareness information
Extent of awareness
Cohorts

Lecture 35

Spatial Models

Even the oldest multi-user environments like MUDs had some model of location, and more awareness for folks in the same location.
regular (grid, hex grid)
simplifies/reduces certain maths. For instance, we can avoid constantly checking and updating distances between all pairs of users if we track their cells and only check distances within the same cell or adjacent cells
irregular
e.g. to reduce awkwardness of borders. text examples show weakly and strongly irregular. Maybe more cells in crowded spots. Maybe no cells that split a building in half. etc.
dynamic
you can organize directly around (movable) entities instead of geography. data structures organize around ability to perceive/be perceived. This is not mutually exclusive with more static spatial models and is often layered atop a spatial model.
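The regular-grid idea can be sketched as follows: map each position to a cell, and treat only same-cell or adjacent-cell pairs as candidates for an exact distance check. The cell size and names are illustrative assumptions, not values from the text:

```java
// Sketch of a regular-grid spatial model: exact distance checks are only
// done for entities whose cells differ by at most 1 in each axis.
public class GridCells {
    static final double CELL = 10.0; // cell size in world units (assumed)

    // Which cell index a 1D world coordinate falls in.
    static int cellOf(double coord) {
        return (int) Math.floor(coord / CELL);
    }

    // True if the two entities are in the same or adjacent cells, i.e.
    // they are candidates for a real distance/awareness check.
    static boolean nearby(double x1, double z1, double x2, double z2) {
        return Math.abs(cellOf(x1) - cellOf(x2)) <= 1
            && Math.abs(cellOf(z1) - cellOf(z2)) <= 1;
    }

    public static void main(String[] args) {
        System.out.println(nearby(0, 0, 9, 9));   // true: same cell (0,0)
        System.out.println(nearby(0, 0, 25, 0));  // false: cells 0 and 2 apart
    }
}
```

The candidate test can report "nearby" for pairs that are actually far apart (opposite corners of adjacent cells), which is fine: it is a cheap filter in front of the exact check, never a replacement for it.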

Visibility Models

base it on the viewing frustum?
too rapidly changing.
potentially visible set
several varying means of implementing this idea. best in an indoor environment; walls make things substantially occluded

Interest Specification

interest expression
when a user, or other entity, states/registers what they are interested in.

Interest Management

Interest Management in CVE

Server Partitioning and Load Balancing

Lecture 36

Seams and handovers

What does it take to do seamless transfers between servers/zones?
In a nutshell: pre-connecting to that neighbor zone before you get there.
It takes a while to establish a connection (if TCP), and it takes a while to load assets and/or dynamic state for a new zone.
Mirror borders
overlap zones and mirror all events in the overlap area in both zones.
Proxies
near zone borders, one can spawn a copy of the entity in the other zone, and have it act as your proxy in that zone
an extreme seamless option
seems like it would be: have your users connected to several servers at all times, such as the zone you are in and all neighboring zones. Switching zones would mean dropping no-longer-adjacent zones and adding newly-adjacent ones. What would be the pros and cons? How hard would this be?
UDP?
Since UDP is connectionless, it might be a bit easier to make things seamless over UDP-based protocols. But then again, UDP doesn't necessarily deliver messages, and most programmers are not going to come up with a more efficient method of reliability than the internet engineers. I have "almost" decided in my next multi-user virtual environment to go with a design where I routinely set up both a TCP and a UDP socket any time I make a connection.

Steed's Ch. 12 Slides

We are just looking for interesting stuff in the slides that I might have skipped, or figures that help illustrate selected topics that I covered.

We did Steed's Ch. 12 Slides 1-15 or so.

Lecture 37

Obstacles noted and items needed

Java Version Issue Report

How well does libGDX work with Java 9-12, from Oracle or OpenJDK? In the past, the answer was: nope. LibGDX's emphasis on multi-platform portability means it will always focus on a lowest common denominator Java version.

Scalability, cont'd

Steed's Ch. 12 Slides, starting from slide 14. We got through slide 32.

Aura, Focus, Nimbus Model

For each media (communication type):
aura
max distance that entities can communicate in that medium. typically circles. intersection == potential awareness
focus
volume of space within which an entity observes
nimbus
volume of space within which an entity may be observed
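Assuming circular auras, foci, and nimbi as the notes suggest, the awareness test reduces to circle intersection: A potentially observes B when A's focus intersects B's nimbus. A sketch with illustrative names:

```java
// Sketch of the aura/focus/nimbus awareness test, assuming circular
// volumes in the x/z plane. A observes B when A's focus circle
// intersects B's nimbus circle.
public class Awareness {
    // Two circles intersect when the distance between their centers is
    // at most the sum of their radii (compared squared, to avoid sqrt).
    static boolean circlesIntersect(double x1, double z1, double r1,
                                    double x2, double z2, double r2) {
        double dx = x1 - x2, dz = z1 - z2;
        double rr = r1 + r2;
        return dx * dx + dz * dz <= rr * rr;
    }

    static boolean observes(double ax, double az, double aFocus,
                            double bx, double bz, double bNimbus) {
        return circlesIntersect(ax, az, aFocus, bx, bz, bNimbus);
    }

    public static void main(String[] args) {
        System.out.println(observes(0, 0, 5, 8, 0, 4)); // true: 8 <= 5+4
        System.out.println(observes(0, 0, 2, 8, 0, 4)); // false: 8 > 2+4
    }
}
```

Because focus and nimbus are separate per medium, A observing B and B observing A are independent questions; each direction gets its own test.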

Lecture 38

Working out the Details on New Protocol Commands

Addendum on [Steed] Load Balancing

entity-centered partitioning
moving entities to another server when one becomes overcrowded
region-centered partitioning
altering the geographic region assigned to a server when it is overcrowded
If you ignore locality, you will just generate more network traffic by doing this, so the smallest unit of migration from server to server should probably be a set of mutually interacting objects.

Scalability, part 3

Steed's Ch. 12 Slides, starting from slide 32.

Lecture 39

Steed Chapter 13: Application and Support Issues

Security

For each of these we need not just a definition, but an example and/or an idea of what the developer can do to prevent it.
stealing from the game developer
what kinds of stealing can people do to the game developer?
stealing/hacking others' accounts
stripping characters' gear and selling it
camping
define and/or give an example. a special form of denial-of-service attack. Some games even allow players to camp other players! EQ II was that way...
farming
what's wrong with Chinese prisons forcing prisoners to farm WoW gold?
griefing
There are innumerable forms of harassment performed online. How can it be prevented?

Cheating

exploiting a system to gain an advantage
give an example
consequences
players will abandon a game that is not fair.
client-side
this is a single gigantic argument against dumb servers. Client-side cheats typically either display more than they should, or improve or replace the supposed "user input" mechanisms. Approaches to prevent client-side cheating:
network level
since attacks may be performed by a man-in-the-middle, a lot of games may need to encrypt traffic at a cost of performance and/or increased latency. IPsec is mentioned but it sounds like overkill to me. stunnel might be used, or libssl. One has to learn about certificates/keys and such.
server-side
this includes both bug exploits and brute-force attacks on the server. Software engineers in general seem unable to avoid bugs in large software systems, but some bugs are much worse than others. Steed mentions a duplication exploit in EQ II that led to 20% inflation within 24 hours. I am not sure denial of service attacks count as cheating, but they can certainly make a game unplayable.
social
players collude or use multiple clients.

Steed's Slides

Lecture 40

Announcements

Using the CS428 Server for Semester Projects

The Server machine is cs-course61.cs.uidaho.edu. Please thank Victor House when you see him.

Steed's Chapter 13 Slides, but today I am only going to do Slides 1-5, and Friday none of them. More from Chapter 13 next week.

Non-Fatal Exceptions

Once while grading homeworks, I came across an interesting side-note:

Making Levels with Uneven Terrain

This topic arose in discussion of mob-relevant level design.

Uneven Terrain in CVE

This is provided for comparison purposes. Maybe it is "for what it's worth".

After starting with all rectangular box shapes, CVE was extended with two primitives to allow for uneven terrain: "ramps", ... and "heightfields".

Ramps

This is a sloping object with trivial arithmetic needed for collision detection. Ramps might be smooth, or broken into steps. A "type" indicates whether it is north-south (type 1), east-west (type 2), or flat (type 3). From the CVE NMSU atrium:
Room {
	name  atrium
	x 70.90000000000001
	y 0
	z 23.3
	w 13
	h 4.05
	l 9.199999999999999
	texture dat/nmsu/texsmall/wall2.gif
	floor Wall { texture dat/nmsu/textures/floor.gif }
	ceiling Wall { texture dat/textures/ceiling.gif }
	obstacles [
		Ramp {
			coords [70.76, 0, 32.5]
			color pink
			texture dat/nmsu/texsmall/blue_tile.gif
			type 3
			width 3.1
			height 1
			length 13.2
			numsteps 5
		}
		Ramp {
			coords [77.2, 0, 29.4]
			color pink
			texture dat/nmsu/texsmall/blue_tile.gif
			type 3
			width 6.2
			height 1
			length 7.3
			numsteps 5
		}
		Ramp {
			coords [74.3, 0, 27.5]
			color pink
			texture dat/nmsu/texsmall/blue_tile.gif
			type 4
			width 2.8
			height 1
			length 4.17
			numsteps 5
		}
		Ramp {
			coords [74.8, 0, 27.3]
			color pink
			texture dat/nmsu/texsmall/blue_tile.gif
			type 1
			width 2
			height 1
			length 4
			numsteps 5
		}
	]
...
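As a guess at what the "trivial arithmetic" for ramp collision detection looks like, here is a sketch for a north-south (type 1) ramp whose floor height rises linearly along its length, quantized into steps when numsteps is nonzero. This is my own illustration of the idea, not CVE's actual code:

```java
// Hypothetical floor-height arithmetic for a type 1 (north-south) ramp.
// The avatar's supporting floor height is a fraction of the ramp height,
// determined by how far along the ramp's length the avatar stands.
public class RampHeight {
    static double floorY(double z, double z0, double length,
                         double height, int numsteps) {
        double frac = (z - z0) / length;  // 0.0 at low end, 1.0 at the top
        if (frac < 0) frac = 0;           // clamp to the ramp's extent
        if (frac > 1) frac = 1;
        if (numsteps > 0)                 // broken into discrete steps
            frac = Math.floor(frac * numsteps) / numsteps;
        return height * frac;             // y-offset above the ramp's base
    }

    public static void main(String[] args) {
        // Halfway up a 10-unit, 1-high smooth ramp:
        System.out.println(floorY(5.0, 0.0, 10.0, 1.0, 0)); // prints "0.5"
        // Same spot with 5 steps lands on the second step:
        System.out.println(floorY(5.0, 0.0, 10.0, 1.0, 5)); // prints "0.4"
    }
}
```

Moving the avatar then reduces to: find the obstacle under the new (x,z), compute its floorY, and step the avatar's y up or down accordingly.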

Lecture 41

Some Other Virtual Environments

Googling "open source" 3D "virtual environment" gets you links to various projects and papers. Some filtering is required.

Read These Tips from GDC

Link of Interest: Optimization

Lecture 42

Link of Interest: Finishing a Game

Steed Ch. 13

How to Reduce Costs of Virtual Environment Creation?

Brainstorm with me a bit on this one. If your task were to build a LOTRO-like rendition of Moscow ID, what would be the primary costs, and how might we seek to reduce them?

  • Build a real place; extracting model from reality could be made easier than generating model from nothingness. Existing large real world models may be source-able. LIDAR and similar 3D scanning technology might become cheapish.
  • Consider VistaPro. Procedurally generating a model from nothingness might not be that hard if you have a good algorithm.
  • Use an existing graphics engine; building one from scratch is too much programming work.
  • The world (models, art), hard as it is to build, would be a hollow shell if not accompanied by meaningful objects: things you can interact with, NPCs, things to do, resources, and things you can make.

    coding cost
    use the highest-level language that can deliver adequate graphics, networking, and compute performance. But coding cost is the minority.
    modeling cost
    where can we import existing models instead of creating our own? can we get to where this "just works" for a vast preponderance of models?
    texture-acquisition cost
    can a library of standard textures deliver 80+% of our texture needs?
    sound-acquisition
    can a library of standard sounds deliver 50+% of our audio needs?

    Lecture 43

    Selected Bits from the Remainder of Di Giuseppe

    Export to FBX

    Fbx-Conv

    I have a preference for G3DJ over G3DB. JSON is good. People sometimes tweak things in their G3DJ. I forget if there's a trivial G3DJ-to-G3DB converter, but there should be; G3DB would be used only for final shipped game binaries.

    Importing a 3D Model

    We can look at how Di Giuseppe does it, or we can look at how turning the pages does it.

    Reducing the Cost of Constructing a 3D World

    The Hong Kong Floor Plan Efforts

    For many virtual environments, you might find the content creator expects you to start from good old Dungeons and Dragons maps, or the bones of old architecture floor plans. We started CVE from such drawings, and then found a bunch of related work.

    Building 3D Models/Worlds From 2D Digital Images

    In or after a previous class, a student mentioned that one can just use Google Foo, or at least that Dr. Ma has done so, to generate 3D models; no fancy LIDAR might be required. Turns out there are lots of tools: Main issue would be: do they generate/export a format that your software can import/use? Another issue would be: many of the available tools are expensive commercial offerings.

    Readings on Intelligent NPC's

    BDI

    Lecture 44

    Welcome to Dr. J's "Last Lecture" (at UI)

    Now for some Highlights from Di Giuseppe Chapter 6

    Should Di Giuseppe be recommending DirectionalShadowLight.java?

    Visual Effect for Weapons Fire and/or Enemy Death Effect

    What should weapons fire look like?
    Maybe easiest to have it actually be a (short lived) entity.
    What should death look like?
    For Di Giuseppe the answer is: soda bubbles and a fadeout. I suppose some weapons might vaporize a target, but this is rare.
    In the Java code:

    Performance Tips (Di Giuseppe)

    Reading Di Giuseppe's code

    For posterity: exactly what did you have to do to get book code running?

    Procedurally-Generated Height Fields

    Impetus: building "outside" and/or other uneven areas.

    Reading

    First, photos:

    Discussion of sloping street outside JEB
    Room {
       NAME  blahblah
       ...
       obstacles [
          ...
          HeightField {
    	 coords [26, 12.3, 18.9]
    	 tex grass.png
    	 width 2
    	 length 2
    	 heights [ [0, 0, 0], [0, 0.5, 0], [0, 0, 0] ]
    	 }
          ]
       }
    
    Discussion of HeightField.icn:
    #
    # A HeightField is a non-flat piece of terrain, analogous to a Ramp.
    # It has a "world-coordinate embedding", within which it plots a grid
    # of varying heights, rendered using a regular mesh of triangles.
    # See [Rabin2010], Chapter 4.2, Figure 4.2.11. Compared with the Rabin
    # examples, we use a particular "alternating diagonal" layout:
    #
    # V-V-V
    # |\|/|
    # V-V-V
    # |/|\|
    # V-V-V
    #
    # Call the rectangular surface regions between adjacent vertices "cells".
    # Let HF be a list of list of heights. There are in fact *HF-1 cell rows,
    # and *(HF[1])-1 cell columns. In the above example *HF=3, *(HF[1])=3, and
    # although the heightfield matrix is a 3x3, the cell matrix is 2x2. The
    # cell row length is length/(*HF-1) and the
    # cell column width is width/(*(HF[1])-1).
    #
    # Vertex Vij, where i is the row and j is the column, starting at 0, is given
    # by x+(i*cell_column_width),y+HF[i+1][j+1],z+(j*cell_row_length).
    #
    class HeightField : Obstacle(
       x, y, z,		# base position
       width, length,	# x- and z- extents
       HF,			# list of lists of y-offsets.
       rows, columns, row_length, column_width # derived/internal
       )
    
       #
       # assumes 0-based subscripts
       #
       method calc_vertex(i,j)
          return [x+i*column_width, y+HF[i+1][j+1], z+(j*row_length)]
       end
    
    method render(render_level)
       every row := 2 to *HF do {
          every col := 2 to *(HF[1]) do {
    	 v1 := calc_vertex(col-1,row-1)
    	 v2 := calc_vertex(col-1,row)
    	 v3 := calc_vertex(col,row)
    	 v4 := calc_vertex(col,row-1)
    	 # there are two cases, triangle faces forward (cell row+col even)
    	 # and triangle faces backward (cell row+col odd)
    	 if (row+col) % 2 = 0 then {
    	    # render a triangle facing the previous column
    	    FillPolygon(v1 ||| v2 ||| v3)
    	    FillPolygon(v1 ||| v3 ||| v4)
    	    }
    	 else {
    	    # render a triangle facing the next column
    	    FillPolygon(v1 ||| v2 ||| v4)
    	    FillPolygon(v2 ||| v3 ||| v4)
    	   }
           }
       }
    end
    initially(coords,w,h,l,tex)
        HF := list(3)
        every !HF := list(3, 0.0)
        HF[2,2] := 0.5
    
        rows := *HF-1
        columns := *(HF[1])-1
        row_length := length/rows
        column_width := width/columns
    end
    
    And: discussion of procedural generating of a heightfield
    procedure main()
       every z := 0 to 5 do {
          every x := 0 to 20 do {
    	 writes(trun(hf(x,z)), " ")
          }
          write()
       }
    end
    
    procedure hf(x:real,z:real)
       return (log(x+1,2) + z/5.0 * (1-x/20)) * 1.5 / 4.39
    end
    
    procedure trun(r)
       return left(string(r) ? tab((find(".")+3)|0), 5)
    end
    

    A (Temporary?) Network Focus

    I am going to focus attention in class on the networking aspects of what we are doing, at least until we get a better handle on it. We've spent only modest time on networking so far.

    Hello, cveworld

    We need a Java client that gets through the initial login/handshake for cve. We apparently need to know the cve cypher. And, we probably need to get it working with non-blocking I/O.

    cve cypher

    Located in utilities.icn, the cve cypher is Very Simple. For each character of a string: encode it by adding 62, and concatenate a random character. To decode, extract every odd character and subtract 62. For example, a cypher of the string "system" is
    s (115) + 62 = 177, hex B1
    y (121) + 62 = 183, hex B7
    s (115) + 62 = 177, hex B1
    t (116) + 62 = 178, hex B2
    e (101) + 62 = 163, hex A3
    m (109) + 62 = 171, hex AB
    
    Because the CVE session transcript was printing out strings using Unicon's "image" function, which prints upper ascii characters in hex format, this string in the transcript looked approximately like "\xb1\x00\xb7\x00\xb1\x00\xb2\x00\xa3\x00\xab\x00" (i.e. it was the "secret" login name used to do a system login and create a new user account).

    Implementing cypher(s) in C would be trivial; is it as trivial in Java? (Yes; see cypher.java.)
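    Here is a minimal Java sketch of the scheme described above. This is my own illustration, not the contents of cypher.java, which may differ in details such as the choice of pad character:

```java
import java.util.Random;

// Sketch of the CVE cypher as described above: each character is encoded
// by adding 62, and a throwaway pad character is appended after it. The
// session transcript suggests the pad was \x00, but any byte decodes fine.
public class CveCypher {
    private static final Random RNG = new Random();

    static String encode(String s) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            out.append((char) (c + 62));         // shift the real character
            out.append((char) RNG.nextInt(256)); // pad character, ignored
        }
        return out.toString();
    }

    // Keep the odd (1st, 3rd, ...) characters and subtract 62.
    static String decode(String s) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < s.length(); i += 2)
            out.append((char) (s.charAt(i) - 62));
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(decode(encode("system"))); // prints "system"
    }
}
```

    Obviously this is obfuscation rather than security; anyone who reads utilities.icn (or this page) can decode the traffic.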

    Next steps towards Hello, cveworld

    OK, so some of your HW#4 chat programs implemented much of the CVE login protocol. Given the cypher, what will it take for us to make one that actually logs in on cveworld.com?

    dat/ directory organization

    If you were using the CVE server, your level designs would go in here. There are some directories with static information replicated on all clients, and some directories with dynamic information, possibly stored mainly on the server.
    cve/doc                - project documentation source; several subdirectories
    cve/dat
    cve/dat/3dmodels/      - basic support for models in .s3d and .x format
    cve/dat/help           - command reference and user guide PDF
    cve/dat/images         - non-textures such as logos
    cve/dat/images/letters - textures images for alphanumeric in-game text
    cve/dat/newsfeed       - in-game forum-like asynchronous text boards
    cve/dat/nmsu           - NMSU Science Hall level; many subdirs
    cve/dat/projects/      - in-game software project spaces
    cve/dat/scratch        - scratch space
    cve/dat/sessions       - in-game collaborative IDE sessions
    cve/dat/textures       - common textures that may be used in all levels
    cve/dat/uidaho/        - UIdaho Janssen Engineering Building
    cve/dat/users          - user accounts not tied to a particular server/level
    

    .avt files

    #@ Avatar property file generated by amaker.icn
    #@ on:	07:17:49 MST 2018/03/22
    NAME=spock
    GENDER=m
    HEIGHT=0.6
    XSIZE=0.7
    YSIZE=0.7
    ZSIZE=0.7
    SKIN COLOR=white
    SHIRT COLOR=white
    PANTS COLOR=white
    SHOES COLOR=white
    HEAD SHAPE=1
    SHAPE=human
    FACE PICTURE=spock.gif
    Privacy=Everyone
    

    The JEB demos

    The JEB (Janssen Engineering Building) demos give us a gentle introduction to the internals of the CVE virtual environment, from a data-centric viewpoint: how do data files describe the virtual world? how would 3D models of virtual objects integrate into this virtual world?
    jeb1 and jeb1.zip
    This introduction to the CVE code shows you the first 2500 lines of (Unicon) CVE code pertaining to the 3D graphics. The network-centric multi-user client/server code is omitted for the sake of exposition; it will need to be discussed later, after understanding the basic "virtual world" object model introduced here.
    jeb2 and jeb2.zip
    This second level of CVE detail consists of around 6100 lines of code. It includes a (rather deplorable, but fairly "programmable", Lego-man) avatar class (avatar.icn), along with a contrasting, crude ancestor of the 3D model file format support libraries (s3dparse.icn for S3D files), and a simple standalone tool for debugging one's room modeling (modview.icn).

    Comparing JEB demo models with the Hong Kong folks'

    It seems that in building virtual versions of the NMSU CS department and the UIdaho CS department, we had some of the same problems as some Hong Kong researchers generating 3D models from floor plans.

    Raw Data

    A virtual space starts with measurements and images. If we had CAD files for JEB that would be swell, but as far as I know, we don't. We have crude floor plans. We need to extract (x,y,z) coordinates for those portions of the building we wish to model, sufficient to make a "wire frame" model. We need to create images depicting the surfaces of all the polygons in that model; the images are called textures and have certain special properties.

    There is one other kind of raw data I'd like you to collect: yourselves. I want to push beyond the crude avatars I've used previously, and model ourselves in crude, low-polygon textured glory.

    On Coordinate Systems

    The coordinate system is the first piece of 3D graphics we are learning. It is very simple and pretty much follows OpenGL conventions. x is east-west, y is up-down, and z is north-south. Positive runs east, up, and south.

    To anyone who gets confused about a positive axis running from right to left or from top to bottom instead of what you were expecting: this just means that your perspective is turned around from those used by the world coordinates. If your character rotates 180 degrees appropriately, suddenly positive values go the opposite direction from before and what was right to left is the more familiar left to right. The point: world coordinates are different from your personal eyeball coordinate system, don't confuse them.

    A Common Coordinate System

    For this class, we will use a standard/common coordinate system: 1.0 units = 1 meter, with an origin (0.0,0.0,0.0) in the northwest corner at ground level. Y grows "up", X grows east, and Z grows south. The entire building (except anything below ground level as viewed from the northwest corner) will have fairly small positive real numbers in the model. This coordinate system is referred to as FHN world coordinates (FHN=Frank Harary Normal). Frank Harary was a graph theorist friend I knew in New Mexico. The coordinate system is named after him because (0,0,0) was at one time the corner of his office.

    Room Modeling

    For simplicity's sake, a room will consist of one or more rectangular areas, each bounded by floor, ceiling, walls, and doors or openings into other rectangular areas. Fortunately or unfortunately for you, we will use the term Room to denote these rectangular areas. Within each room are 0 or more obstacles and decorations. Obstacles are things like tables and chairs, computers and printers. Decorations are things like signs and posters that do not affect movement.

    For your room, we need to:

    Another Sample Room

    Taken from NMSU's virtual CS department, we have the following. It is for a ground-floor room (the y values are 0). This example has both obstacles and decorations.
    Room {
    name SH 167
    x 29.2
    y 0
    z 0.2
    w 6
    h 3.05
    l 3.7
    floor Wall {
    texture floor2.gif
    coords [29.2,0,0.2, 29.2,0,3.9, 35.2,0,3.9, 35.2,0,0.2]
    }
    obstacles [
       Box { # column
          Wall {coords [34.3,0,0.2, 34.3,3.05,0.2, 34.3,3.05,0.6, 34.3,0,0.6]}
          Wall {coords [34.3,0,0.6, 34.0,0,0.6, 34.0,3.05,0.6, 34.3,3.05,0.6]}
          Wall {coords [34.0,0,0.6, 34.0,3.05,0.6, 34.0,3.05,0.2, 34.0,0,0.2]}
          }
       Box { # window sill
          Wall {coords [29.2,0,0.22, 29.2,1.0,0.22, 35.2,1.0,0.22, 35.2,0,0.22]}
          Wall {coords [29.2,1,0.22, 29.2,1.0,0.2,  35.2,1.0,0.2, 35.2,1,0.22]}
          }
       Chair {
              coords [31.2,0,1.4]
       	  position 0
    	  color red
    	  type office
              movable true         
             }
       Table {
             coords [31.4,0,2.4]
       	 position 180
    	 color very dark brown
    	 type  office
           }
       ]
    decorations [
       Wall { # please window
          texture wall2.gif
          coords [29.2,1.0,0.22, 29.2,3.2,0.22, 35.2,3.2,0.22, 35.2,1.0,0.22]
          }
       Wall { # whiteboard
          texture whiteboard.gif
          coords [29.3,1.0,3.7, 29.3,2.5,3.7, 29.3,2.5,0.4, 29.3,1.0,0.4]
          }
       Windowblinds {
               coords [29.2,1.5,0.6]
               angle 90
               crod  blue
               cblinds  very dark purplish brown 
               height 3.05
               width  6
       }
    ]
    }
    

    Diving into Jeb1

    File Organization Comes First. It appears we have two source files, and a few images (.gif) and model (.dat) files.
    jeb1: jeb1.icn model.u
    	unicon jeb1 model.u
    
    model.u: model.icn
    	unicon -c model
    
    jeb1.zip: jeb1.icn model.icn
    	zip jeb1.zip jeb1.icn model.icn *.gif *.dat makefile README
    

    jeb1.icn

    A Walk Through a Smattering of Jeb1

    Most attributes can be changed afterwards using function WAttrib(), which takes as many attributes as you like. The following line enables texture mapping in the window:
        WAttrib("texmode=on")
    

    Assigning a value to a variable uses := in Unicon. Most of the rest of the numeric operators and computation are the same as in any programming language. In 3D graphics, a lot of real numbers are used. In the jeb1 demo, the user controls a moving camera, which has an (x,y,z) location, an (x,y,z) vector describing where it is looking relative to its current position, and camera angles up or down. Initial values of posx, posy, and posz are "in the middle of the first room in the model". The rooms have min and max values for x, y, and z, so:

    	       posx := (r.minx + r.maxx) / 2
    	       posy := r.miny + 1.9
    	       posz := (r.minz + r.maxz) / 2
    	       lookx := posx; looky := posy -0.15; lookz := 0.0
    

    Unicon has a "list" data type for storing an ordered collection of items. There is a global variable named Rooms which holds such a list, a list of Room() objects. We will discuss Room() objects in a bit. This is how the list of rooms is created, with 0 elements:

        Rooms := [ ]
    

    The code that actually reads a model file and creates Room() objects and puts them on the Rooms list is procedure make_model(). We defer the actual parsing discussion to later or elsewhere.

    procedure make_model(corridor)
    local fin, s, r
       fin := open(modelfile) | stop("can't open model.dat")
       while s := readlin(fin) do s ? {
           if ="#" then next
           else if ="Room" then {
    	   r := parseroom(s, fin)
    	   put(world.Rooms, r)
    	   world.RoomsTable[r.name] := r
    	   if /posx then {
    	       # ... no posx defined, calculate posx/posy per earlier code
    	   }
           }
           else if ="Door" then parsedoor(s,fin)
           else if ="Opening" then parseopening(s,fin)
       # else: we didn't know what to do with it; maybe it's an error!
       }
       close(fin)
    end
    
    The "rooms" in jeb.dat are JEB230, JEB 228 and the corridor immediately outside. Each room is created by a constructor procedure, and inserted into both a list and a table for convenient access. (Do we need the list? Maybe not! Tables are civilization!)

    The following line steps through all the elements of the Rooms list, and tells each Room() to draw itself. The exclamation point is an operator that generates each element from the list if the surrounding expression requires it. The "every" control structure requires every result the expression can produce. The .render() calls a method render() on an object (in this case, on each object in turn as it is produced by !). Note that our CVE will probably want to get smart about only drawing those rooms that are "visible", in order to scale performance.

        every (!Rooms).render()
    

    Aspects of the .dat file format

    Obstacles

    Student question for the day: my room is not a perfect rectangle, it has an extra column jutting out on one of the walls, what do I do?

    Answer: in HW4 such a thing might be omitted, but you might get around to including the column as an obstacle (a virtual Box) in your .dat file. The obstacles section is also where things like bookshelves and tables might go.

    obstacles [
       Box { # column
          Wall {coords [34.3,0,0.2, 34.3,3.05,0.2, 34.3,3.05,0.6, 34.3,0,0.6]}
          Wall {coords [34.3,0,0.6, 34.0,0,0.6, 34.0,3.05,0.6, 34.3,3.05,0.6]}
          Wall {coords [34.0,0,0.6, 34.0,3.05,0.6, 34.0,3.05,0.2, 34.0,0,0.2]}
          }
    ]
    

    Ceilings?

    Originally there was no special ceiling syntax; before now a person had to put a decoration up there in order to change the ceiling. However, ceilings are very much like floors, so I went into model.icn procedure parseroom() and added the following after the floor code. Hopefully it is there now in the public version.

    The parser is handwritten in a vaguely recursive-descent style; at syntax levels where many fields can be read, it builds a table (so order does not matter) from which we populate an object's fields. So the table t's fields correspond to what the file had in it, and the .coords here is the list of vertices which the object in the model wants.

       if \ (t["ceiling"]) then {
          t["ceiling"].coords := [t["x"],t["y"]+t["h"],t["z"],
                    t["x"],t["y"]+t["h"],t["z"]+t["l"],
                    t["x"]+t["w"],t["y"]+t["h"],t["z"]+t["l"],
                    t["x"]+t["w"],t["y"]+t["h"],t["z"]]
          t["ceiling"].set_plane()
          r.ceiling := t["ceiling"]
          }
    
    This allows us to say declarations like this in .dat files:
    ceiling Wall {
    texture jeb230calendar.gif
    }
    

    Event Handling

    Most programs that open a window or use a graphical user interface are event-driven, meaning that the program's main job is to sit around listening for user key presses and mouse clicks, interpreting them as instructions, and carrying them out. Pending() returns a list of events waiting to be processed. Event() actually returns the key or mouse event. For a simple demo program, one could code the event processing loop oneself, something like the following.
       repeat {
          if *Pending() = 0 then {
             # continue in current direction, if any
             }
          else {
             case ev := Event() of {
    	    Key_Up:    cam_move(xdelta := 0.05)		# Move Forward
                ... other keys
                }
             }
          }
    

    Events may be strings (for keyboard characters), but most are small negative integer codes, with symbolic names such as Key_Up defined in keysyms.icn.

    $include "keysyms.icn"
    
    The Jeb1 demo isn't this simple, since it embeds the 3D window in a Unicon GUI interface. Events will be discussed in more detail below; for now it is enough to say that they just modify the camera location and tell the scene to redraw itself. cam_move() checks for a collision and, if there is none, updates the global variables (e.g. posx, posy, posz). After the cam_move(), function Eye(x,y,z,lx,ly,lz) sets the camera position and look direction. jeb1 is a Unicon GUI application: the GUI owns the control flow and calls a procedure when an interesting event happens. In Unicon terminology, a Dispatcher runs the following loop until the program exits. The key call is select(), which tells us which input sources have events for us.
       method message_loop(r)
       local L, dialogwins, x
          connections := []
          dialogwins := set()
          every insert(dialogwins, (!dialogs).win)
          every put(connections, !dialogwins | !subwins | !nets)
          while \r.is_open do {
    	 if x := select(connections,1)[1] then {
                if member(subwins, x) then {
    	       &window := x
    	       do_cve_event()
    	       }
                else if member(dialogwins, x) then do_event()
                else if member(nets, x) then do_net(x)
    	    else write("unknown selector ", image(x))
    
    	    # do at least one step per select() for smoother animation
    	    do_nullstep()
    	    }
             else do_validate() | do_ticker() | do_nullstep() | delay(idle_sleep)
    	 }
       end
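
    The same multiplexing pattern can be sketched in Python with the standard selectors module. The names here (register, pump_once, "net", "nullstep") are hypothetical stand-ins for the roles played by N3Dispatcher's sets and methods, not a real API.

    ```python
    # Sketch of select()-style dispatch: register input sources tagged with a
    # kind, then each pump dispatches ready sources or falls back to an idle
    # "nullstep", as message_loop() does for smoother animation.
    import selectors, socket

    def make_dispatcher():
        sel = selectors.DefaultSelector()
        handled = []

        def register(source, kind):
            # kind plays the role of member(subwins/dialogwins/nets, x)
            sel.register(source, selectors.EVENT_READ, kind)

        def pump_once(timeout=0.001):
            events = sel.select(timeout)
            if not events:
                handled.append("nullstep")   # idle: animate anyway
                return
            for key, _ in events:
                handled.append(key.data)     # dispatch by source kind
                key.fileobj.recv(4096)       # drain the pending input
        return register, pump_once, handled

    # usage: a socketpair stands in for a network connection
    a, b = socket.socketpair()
    register, pump_once, handled = make_dispatcher()
    register(a, "net")
    b.send(b"hello")
    pump_once()
    # handled now contains "net"
    ```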
    
    do_event() calls the normal Unicon GUI callbacks for the menus, textboxes, etc. do_cve_event() is a GUI handler for keys in the 3D subwindow.
       method do_cve_event()
       local ev, dor, dist, closest_door, closest_dist, L := Pending()
          case ev := Event() of {
    	 Key_Up: {
    	    xdelta := 0.05
    	    while L[1]===Key_Up_Release & L[4]===Key_Up do {
    	       Event(); Event(); xdelta +:= 0.05
    	       }
     	    cam_move(xdelta)		# Move Forward
    	    }
    	 Key_Down: {
    	    xdelta := -0.05
    	    while L[1]===Key_Down_Release & L[4]===Key_Down do {
    	       Event(); Event(); xdelta -:= 0.05
    	       }
    	    cam_move(xdelta)	# Move Backward
    	    }
    	 Key_Left: {
    	    ydelta := -0.05
    	    while L[1]===Key_Left_Release & L[4]===Key_Left do {
    	       Event(); Event(); ydelta -:= 0.05
    	       }
    	    cam_orient_yaxis(ydelta) # Turn Left
    	    }
    	 Key_Right: {
    	    ydelta := 0.05
    	    while L[1]=== Key_Right_Release & L[4] === Key_Right do {
    	       Event(); Event(); ydelta +:= 0.05
    	       }
    	    cam_orient_yaxis(ydelta)	 # Turn_Right
    	    }
    	 "w":       looky +:= (lookdelta := 0.05)  #Look Up
    	 "s":       looky +:= (lookdelta := -0.05) #Look Down
    	 "q": exit(0)
    	 "d": {
    	    closest_door := &null
    	    closest_dist := &null
    	    every (dor := !(world.curr_room.exits)) do {
    	       if not find("Door", type(dor)) then next
    	       dist := sqrt((posx-dor.x)^2+(posz-dor.z)^2)
    	       if /closest_door | (dist < closest_dist) then {
    		  closest_door := dor; closest_dist := dist
    	          }
    	       }
    	    if \closest_door then {
    	       if \ (closest_door.delt) === 0 then {
    	          closest_door.start_opening()
    	          }
    	       else closest_door.done_opening()
    	       closest_door.delta()
    	       }
    	    }
    	 -166 | -168 | (-(Key_Up|Key_Down) - 128) :    xdelta := 0
    	 -165 | -167 | (-(Key_Left|Key_Right) - 128) : ydelta := 0
    	 -215 | -211 : 	lookdelta := 0
            }
    
          Eye(posx,posy,posz,lookx,looky,lookz)
    
       end
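
    The "d" key handler above picks the nearest door by Euclidean distance in the xz plane, ignoring height. The same search in a Python sketch, where the dictionaries are hypothetical stand-ins for Door objects:

    ```python
    # Nearest-door search: linear scan keeping the minimum xz distance,
    # mirroring the closest_door/closest_dist logic in do_cve_event().
    import math

    def closest_door(posx, posz, exits):
        best, best_dist = None, None
        for door in exits:
            dist = math.hypot(posx - door["x"], posz - door["z"])
            if best is None or dist < best_dist:
                best, best_dist = door, dist
        return best
    ```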
    

    The Line Between jeb1.icn and model.icn

    This program is a rapid prototype to test a concept. Originally it was a single file (a single procedure!), but after the initial demo (by Korrey Jacobs, adapted by Ray Lara, both at NMSU) proved the concept, Dr. J started reorganizing it into two categories: the code providing the underlying modeling capabilities (model.icn) and the code providing the user interface (jeb1.icn). The dividing line is imperfect; we might want to move some code from one file into the other.

    model.icn

    We may as well start with the larger of the two source files. model.icn is intended to be usable for any CVE, not just the UI CS department CVE. It defines classes Door, Wall, Box, and Room, where Room is a subclass of Box.

    Wall() is the simplest class here: it is just a textured polygon, holding a texture value and a list of x,y,z coordinates, and providing a method render(). Every object in the CVE's model will provide a method render().

    class Wall(texture, coords)
       method render()
          if current_texture ~=== texture then {
             WAttrib("texture="||texture, "texcoord=0,0,0,1,1,1,1,0")
             current_texture := texture
             }
          (FillPolygon ! coords) |  write("FillPolygon fails")
       end
    initially(t, c[])
      texture := t
      coords := c
    end
    

    Class Box() is more interesting: it is a rectangular area with walls that one cannot walk through, and a bounding box for collision detection. Doors, openings, and other exceptions are special-cased by subclassing and overriding default behavior. Rectangular areas are singled out because they are common and have easy collision detection; when a wall goes from floor to ceiling, collision detection reduces to a 2D problem.

    Box() has methods:

    Class Door() is not just a graphical object; it is a connection between two rooms, which can be open (1.0), closed (0.0), or in between. It supports methods to start, advance, and finish its open/close animation (start_opening(), delta(), done_opening()).
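
    A hedged Python sketch of that Door state, with made-up class and field names that mirror the roles of the real model.icn Door rather than its code:

    ```python
    # Door sketch: openness runs from 0.0 (closed) to 1.0 (open); delt is the
    # per-frame change, 0.0 when the door is at rest, as in the demo's "d" key
    # handling and do_nullstep() animation step.
    class DoorSketch:
        def __init__(self, room_a, room_b):
            self.rooms = (room_a, room_b)   # a door joins exactly two rooms
            self.openness = 0.0             # 0.0 closed .. 1.0 open
            self.delt = 0.0                 # per-frame change

        def start_opening(self):
            # open if closed, close if open
            self.delt = 0.05 if self.openness < 1.0 else -0.05

        def delta(self):
            # advance one animation step; False means the motion is done
            self.openness += self.delt
            if 0.0 < self.openness < 1.0:
                return True
            self.openness = max(0.0, min(1.0, self.openness))
            return False

        def done_opening(self):
            self.delt = 0.0

        def other(self, room):
            # the room on the far side, as Room.disallows() needs it
            return self.rooms[1] if room == self.rooms[0] else self.rooms[0]
    ```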

    The full code of jeb1.icn

    import gui
    $include "guih.icn"
    
    class Untitled : Dialog(chat_input, chat_output, text_field_1, subwin)
       method component_setup()
          self.setup()
       end
    
       method end_dialog()
       end
    
       method init_dialog()
       end
    
       method on_exit(ev)
          write("goodbye")
          exit(0)
       end
    
       method on_br(ev)
       end
    
       method on_kp(ev)
       end
    
       method on_mr(ev)
       end
    
       method on_subwin(ev)
          write("subwin")
       end
    
       method on_about(ev)
       local sav
          sav := &window
          &window  := &null
          Notice("jeb1 - a 3D demo by Jeffery")
          &window := sav
       end
    
       method on_chat(ev)
          chat_output.set_contents(put(chat_output.get_contents(), chat_input.get_contents()))
          chat_output.set_selections([*(chat_output.get_contents())])
          chat_input.set_contents("")
       end
    
       method setup()
          local exit_menu_item, image_1, menu_1, menu_2, menu_bar_1, overlay_item_1, overlay_set_1, text_menu_item_2
          self.set_attribs("size=800,750", "bg=light gray", "label=jeb1 demo")
          menu_bar_1 := MenuBar()
          menu_bar_1.set_pos("0", "0")
          menu_bar_1.set_attribs("bg=very light green", "font=serif,bold,16")
          menu_1 := Menu()
          menu_1.set_label("File")
          exit_menu_item := TextMenuItem()
          exit_menu_item.set_label("Exit")
          exit_menu_item.connect(self, "on_exit", ACTION_EVENT)
          menu_1.add(exit_menu_item)
          menu_bar_1.add(menu_1)
          menu_2 := Menu()
          menu_2.set_label("Help")
          text_menu_item_2 := TextMenuItem()
          text_menu_item_2.set_label("About")
          text_menu_item_2.connect(self, "on_about", ACTION_EVENT)
          menu_2.add(text_menu_item_2)
          menu_bar_1.add(menu_2)
          self.add(menu_bar_1)
          overlay_set_1 := OverlaySet()
          overlay_set_1.set_pos(6, 192)
          overlay_set_1.set_size(780, 558)
          overlay_item_1 := OverlayItem()
          overlay_set_1.add(overlay_item_1)
          overlay_set_1.set_which_one(overlay_item_1)
          self.add(overlay_set_1)
          subwin := Subwindow3D()
          subwin.set_pos(14, 195)
          subwin.set_size("767", "551")
          subwin.connect(self, "on_subwin", ACTION_EVENT)
          subwin.connect(self, "on_br", BUTTON_RELEASE_EVENT)
          subwin.connect(self, "on_mr", MOUSE_RELEASE_EVENT)
          subwin.connect(self, "on_kp", KEY_PRESS_EVENT)
          self.add(subwin)
          chat_input := TextField()
          chat_input.set_pos("12", "162")
          chat_input.set_size("769", "25")
          chat_input.set_draw_border()
          chat_input.set_attribs("bg=very light green")
          chat_input.connect(self, "on_chat", ACTION_EVENT)
          chat_input.set_contents("")
          self.add(chat_input)
          chat_output := TextList()
          chat_output.set_pos("10", "29")
          chat_output.set_size("669", "127")
          chat_output.set_draw_border()
          chat_output.set_attribs("bg=very pale whitish yellow")
          chat_output.set_contents([""])
          self.add(chat_output)
          image_1 := Image()
          image_1.set_pos("686", "31")
          image_1.set_size("106", "120")
          image_1.set_filename("nmsulogo.gif")
          image_1.set_internal_alignment("c", "c")
          image_1.set_scale_up()
          self.add(image_1)
       end
    
       initially
          self.Dialog.initially()
    end
    
    #
    # N3Dispatcher is a custom dispatcher.  Currently it knows about 3D
    # subwindows but we will extend it for networked 3D applications.
    #
    class N3Dispatcher : Dispatcher(subwins, nets, connections)
       method add_subwin(sw)
          insert(subwins, sw)
       end
       method do_net(x)
          write("do net ", image(x))
       end
    
       method do_nullstep()
       local moved, dor
    
       thistimeofday := gettimeofday()
       thistimeofday := thistimeofday.sec * 1000 + thistimeofday.usec / 1000
       if (delta := thistimeofday - \lasttimeofday) < 17 then {
          delay(17 - delta)
          }
       lasttimeofday := thistimeofday
    
          if xdelta ~= 0 then {
    	 cam_move(xdelta)
    	 moved := 1
    	 }
          if ydelta ~= 0 then {
    	 cam_orient_yaxis(ydelta)
    	 moved := 1
    	 }
          if lookdelta ~= 0 then {
    	 looky +:= lookdelta; moved := 1
    	 }
          every (\((dor := !(world.curr_room.exits)).delt)) ~=== 0 do {
    	 if dor.delta() then moved := 1
             else dor.done_opening()
    	 }
          if \moved then {
    	 Eye(posx,posy,posz,lookx,looky,lookz)
             return
    	 }
       end
    
    method cam_move(dir)
    local deltax := dir * cam_lx, deltaz := dir * cam_lz
    
       if world.curr_room.disallows(posx+deltax,posz+deltaz) then {
          deltax := 0
          if world.curr_room.disallows(posx+deltax,posz+deltaz) then {
    	 deltaz := 0; deltax := dir*cam_lx
    	 if world.curr_room.disallows(posx+deltax,posz+deltaz) then {
    	    fail
    	    }
    	 }
          }
    
       #calculate new position
       posx +:= deltax
       posz +:= deltaz
    
       #update look at spot
       lookx := posx + cam_lx
       lookz := posz + cam_lz
    end
    
    #
    # Orient the camera
    #
    method cam_orient_yaxis(turn)
    
       #update camera angle
       cam_angle +:= turn
    
       if abs(cam_angle) > 2 * &pi then
          cam_angle := 0.0
    
       cam_lx := sin(cam_angle)
       cam_lz := -cos(cam_angle)
    
       lookx := posx + cam_lx
       lookz := posz + cam_lz
    end
    
       global lasttimeofday
       #
       # Execute one event worth of motion and update the camera
       #
       method do_cve_event()
       local ev, dor, dist, closest_door, closest_dist, L := Pending()
          case ev := Event() of {
    	 Key_Up: {
    	    xdelta := 0.05
    	    while L[1]===Key_Up_Release & L[4]===Key_Up do {
    	       Event(); Event(); xdelta +:= 0.05
    	       }
     	    cam_move(xdelta)		# Move Forward
    	    }
    	 Key_Down: {
    	    xdelta := -0.05
    	    while L[1]===Key_Down_Release & L[4]===Key_Down do {
    	       Event(); Event(); xdelta -:= 0.05
    	       }
    	    cam_move(xdelta)	# Move Backward
    	    }
    	 Key_Left: {
    	    ydelta := -0.05
    	    while L[1]===Key_Left_Release & L[4]===Key_Left do {
    	       Event(); Event(); ydelta -:= 0.05
    	       }
    	    cam_orient_yaxis(ydelta) # Turn Left
    	    }
    	 Key_Right: {
    	    ydelta := 0.05
    	    while L[1]=== Key_Right_Release & L[4] === Key_Right do {
    	       Event(); Event(); ydelta +:= 0.05
    	       }
    	    cam_orient_yaxis(ydelta)	 # Turn_Right
    	    }
    	 Key_PgUp |
    	 "w":       looky +:= (lookdelta := 0.05)  #Look Up
    	 Key_PgDn |
    	 "s":       looky +:= (lookdelta := -0.05) #Look Down
    	 "q": exit(0)
    	 "d": {
    	    closest_door := &null
    	    closest_dist := &null
    	    every (dor := !(world.curr_room.exits)) do {
    	       if not find("Door", type(dor)) then next
    	       dist := sqrt((posx-dor.x)^2+(posz-dor.z)^2)
    	       if /closest_door | (dist < closest_dist) then {
    		  closest_door := dor; closest_dist := dist
    	          }
    	       }
    	    if \closest_door then {
    	       if \ (closest_door.delt) === 0 then {
    	          closest_door.start_opening()
    	          }
    	       else closest_door.done_opening()
    	       closest_door.delta()
    	       }
    	    }
    	 -166 | -168 | (-(Key_Up|Key_Down) - 128) :    xdelta := 0
    	 -165 | -167 | (-(Key_Left|Key_Right) - 128) : ydelta := 0
    	 -215 | -211 | (-(Key_PgUp|Key_PgDn)-128): 	lookdelta := 0
            }
    
          Eye(posx,posy,posz,lookx,looky,lookz)
    
       end
    
       method message_loop(r)
       local L, dialogwins, x
          connections := []
          dialogwins := set()
          every insert(dialogwins, (!dialogs).win)
          every put(connections, !dialogwins | !subwins | !nets)
          while \r.is_open do {
    	 if x := select(connections,1)[1] then {
                if member(subwins, x) then {
    	       &window := x
    	       do_cve_event()
    	       }
                else if member(dialogwins, x) then do_event()
                else if member(nets, x) then do_net(x)
    	    else write("unknown selector ", image(x))
    
    	    # do at least one step per select() for smoother animation
    	    do_nullstep()
    	    }
             else do_validate() | do_ticker() | do_nullstep() | delay(idle_sleep)
    	 }
       end
    initially
          subwins := set()
          nets := set()
          dialogs := set()
          tickers := set()
          idle_sleep_min := 10
          idle_sleep_max := 50
          compute_idle_sleep()
    end
    
    class Subwindow3D : Component ()
       method resize()
         compute_absolutes()
         # WAttrib(cwin, "size="||w||","||h)
       end
       method display()
       initial please(cwin)
        Refresh(cwin)
       end
       method init()
          if /self.parent then
             fatal("incorrect ancestry (parent null)")
          self.parent_dialog := self.parent.get_parent_dialog_reference()
    
          self.cwin := (Clone ! ([self.parent.get_cwin_reference(), "gl",
    			      "size="||w_spec||","||h_spec,
    			      "pos=14,195", "inputmask=mck"] |||
    			     self.attribs)) | stop("can't open 3D win")
          self.cbwin := (Clone ! ([self.parent.get_cbwin_reference(), "gl",
    			       "size="||w_spec||","||h_spec,
    			       "pos=14,195"] |||
    			      self.attribs))
          set_accepts_focus()
          dispatcher.add_subwin(self.cwin)
       end
    end
    
    # link "world"
    
    global modelfile
    
    procedure main(argv)
       local d
       modelfile := argv[1] | stop("usage: jeb1 modelfile")
       world := FakeWorld()
       #
       # overwrite the system dispatcher with one that knows about subwindows
       #
       gui::dispatcher := N3Dispatcher()
       d := Untitled()
       d.show_modal()
    end
    
    
    link model
    global world
    
    
    procedure make_model(cooridoor)
    local fin, s, r
       fin := open(modelfile) | stop("can't open ", image(modelfile))
       while s := read(fin) do s ? {
           if ="#" then next
           else if ="Room" then {
    	   r := parseroom(s, fin)
    	   put(world.Rooms, r)
    	   world.RoomsTable[r.name] := r
    	   if /posx then {
    	       r.calc_boundbox()
    	       posx := (r.minx + r.maxx) / 2
    	       posy := r.miny + 1.9
    	       posz := (r.minz + r.maxz) / 2
    	       lookx := posx; looky := posy -0.15; lookz := 0.0
    	   }
           }
           else if ="Door" then parsedoor(s,fin)
           else if ="Opening" then parseopening(s,fin)
    #       else write("didn't know what to do with ", image(s))
       }
       close(fin)
    end
    
    #CONTROLS:
    #up arrow - move forward
    #down arrow - move backward
    #left arrow - rotate camera left
    #right arrow - rotate camera right
    # ' w ' key - look up
    # ' s ' key - look down
    # ' d ' key - toggle door open/closed
    
    #if you get lost in space (may happen once in a while)
    #just restart the program
    
    $include "keysyms.icn"
    
    #GLOBAL variables
    global posx, posy, posz           # current eye x,y,z position
    global lookx, looky, lookz        # current look x position and so on
    global cam_lx, cam_lz, cam_angle  # eye angles for orientation
    global xdelta, ydelta, lookdelta
    
    global Rooms
    
    procedure please(d)
    local r
        &window := d
        WAttrib("texmode=on")
    
        #initialize globals
    #    posx := 32.0; posy := 1.9; posz := 2.0
    #    lookx := 32.0; lookz := 0.0;    looky := 1.75
        cam_lx := cam_angle := 0.0; cam_lz := -1.0
    
        # render graphics
        make_model()
        every r := !world.Rooms do {
    	if not r.disallows(posx, posz) then
    	    world.curr_room := r
        }
        every (!world.Rooms).render(world)
    
       xdelta := ydelta := lookdelta := 0
       dispatcher.cam_move(0.01)
       Eye(posx,posy,posz,lookx,looky,lookz)
        # ready for event processing loop
    end
    
    # fakeworld - minimal nsh-world.icn substitute for demo
    
    record fakeconnection(connID)
    
    class FakeWorld(
    	current_texture, d_wall_tex, connection, curr_room,
    	d_ceil_tex, d_floor_tex, collide, Rooms, RoomsTable
    	)
    method find_texture(s)
        return s
    end
    initially
      Rooms := []
      RoomsTable := table()
      collide := 0.8
      connection := fakeconnection()
      d_floor_tex := "floor.gif"
      d_wall_tex := "walltest.gif"
      d_ceil_tex := d_wall_tex
    end
    
    
    
    ### Ivib-v2 layout ##
    #...blah blah machine-generated comments omitted...
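
    Two bits of camera math from the listing above can be captured in a short Python sketch: cam_orient_yaxis() keeps a yaw angle whose look vector is (sin a, -cos a), and cam_move() "slides" along a wall by retrying the move with one axis zeroed before giving up. Function names and the disallows callback are illustrative.

    ```python
    # Camera yaw and wall-sliding movement, sketched from jeb1.icn's logic.
    import math

    def look_vector(cam_angle):
        # cam_orient_yaxis(): angle 0 looks straight down the -z axis
        return (math.sin(cam_angle), -math.cos(cam_angle))

    def slide_move(x, z, dx, dz, disallows):
        # cam_move(): try the full step, then each axis alone; None = blocked
        for tx, tz in ((dx, dz), (0.0, dz), (dx, 0.0)):
            if not disallows(x + tx, z + tz):
                return (x + tx, z + tz)
        return None
    ```

    Zeroing one axis at a time is what lets the eye glide along a wall it runs into, instead of sticking to it.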
    
    

    model.icn

    The only part we had time to look at today was the top of class Room:

    Class Room()

    Class Room() is the most important, and is presented in its entirety. From Box we inherit the vertices that bound our rectangular space.
    class Room : Box(floor, # "wall" under our feet
    	   ceiling,     # "wall" over our heads
    	   obstacles,	# list: things that stop movement
    	   decorations, # list: things to look at
    	   exits,	# x-y ways to leave room
    	   name
    	   )
    
    A room disallows a move if (a) the destination is outside the room, or (b) something is in the way. The margin of k meters (k = 1.2 in the code below) reduces graphical oddities that occur if the eye gets too near what it is looking at. Note that JEB doors are kind of narrow, and that OpenGL's graphical clipping makes it relatively easy to accidentally see through walls.
       method disallows(x,z)
          if /minx then calc_boundbox() 
    
          # regular area is normally OK
          if minx+1.2 <= x <= maxx-1.2 & minz+1.2 <= z <= maxz-1.2 then {
             every o := !obstacles do
                if o.disallows(x,z) then return
             fail
             }
          # outside of regular area OK if an exit allows it
          every e := !exits do {
             if e.allows(x,z) then {
                if minx <= x <= maxx & minz <= z <= maxz then {
                   # allow but don't change room yet
                   }
                else {
                   curr_room := e.other(self) # we moved to the other room
                   }
                fail
                }
             }
          return
       end
    
    Method render() draws an entire room.
       method render()
          every ex := !exits do ex.render()
          WAttrib("texmode=on")
          floor.render()
          ceiling.render()
    
          every (!walls).render()
          every (!obstacles).render()
          every (!decorations).render()
       end
    
    The following add_door method tears a hole in a wall. It needs extending to handle multiple doors in the same wall, and to handle xplane walls. These and many other features may actually be in model.icn; the code example in class is a simplified summary.
       method add_door(d)
          put(exits, d)
          d.add_room(self)
    
          # figure out which wall this door is in, and tear a hole in it:
          # find the wall the door is in, remove that wall,
          # and replace it with three
    
        every w := !walls do {
           c := w.coords
           if c[1]=c[4]=c[7]=c[10] then {
              if d.x = c[1] then write("door is in xplane wall ", image(w))
              }
           else if c[3]=c[6]=c[9]=c[12] then {
              if abs(d.z - c[3]) < 0.08 then { # door is in a zplane wall
                 # remove this wall
                 while walls[1] ~=== w do put(walls,pop(walls))
                 pop(walls)
                 # replace it with three segments:
                 # w = above, w2 = left, and w3 = right of door
                 w2 := Wall ! ([w.texture] ||| w.coords)
                 w3 := Wall ! ([w.texture] ||| w.coords)
                 every i := 1 to *w.coords by 3 do {
                    w.coords[i+1] <:= d.y+d.height
                    w2.coords[i+1] >:= d.y+d.height
                    w2.coords[i] >:= d.x
                    w3.coords[i+1] >:= d.y+d.height
                    w3.coords[i] <:= d.x + d.width
                    }
                put(walls, w, w2, w3)
                return
                }
             }
           else { write("no plane; giving up"); fail }
           }
       end
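
    The clamping in add_door() (Icon's <:= raises a value to a minimum, >:= lowers it to a maximum) amounts to cutting the wall rectangle into the pieces above, left of, and right of the door. A simplified 2D Python sketch, with rectangles as (x0, y0, x1, y1) tuples and a door given as (dx, dy, dw, dh); the names are illustrative:

    ```python
    # Split one wall rectangle into three segments around a door hole,
    # mirroring add_door()'s per-vertex clamping with min/max on whole rects.
    def split_wall(wall, door):
        x0, y0, x1, y1 = wall
        dx, dy, dw, dh = door
        above = (x0, max(y0, dy + dh), x1, y1)           # raise bottom to door top
        left  = (x0, y0, min(x1, dx), min(y1, dy + dh))  # clamp right edge and top
        right = (max(x0, dx + dw), y0, x1, min(y1, dy + dh))
        return above, left, right
    ```

    The three pieces tile the original wall minus the door opening, which is exactly the hole the renderer needs.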
    
    Rooms maintain separate lists for obstacles and decorations. Obstacles figure in collision detection.
       method add_obstacle(o)
          put(obstacles, o)
       end
       method add_decoration(d)
          put(decorations, d)
       end
    

    Monster State

    You may have worked some of this out already, or be working on it; I just want to push forward.

    Extending CVE Network Protocol

    It may be painful for the server to perform certain computations currently performed by the client, such as determining, for each shot, what it hit.
    From client to server (and from server to other clients):

    \fire dx dy dz target
    Sent on every shot. The coordinates are a direction vector; target is the entity hit according to the client.

    From server to client:

    \damage target amount
    The server's assessment of damage, sent in response to \fire.
    \weapon userid X
    Change to weapon X, where X is one of: spear, pistol, shotgun.
    \death target
    Notification to de-rez someone's avatar, short of them actually logging out (which would disconnect them).
    \avatar ...
    Notification to rez someone's avatar; our respawn command. For example:
    \avatar raptor11 ...
    \avatar akyl7
    \inform ...
    Posts a message to clients' chat boxes; occurs during regular login and might need to occur on respawn.
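
    A sketch of splitting these backslash commands on the receiving side. Only the command spellings come from the notes; the dispatch shape and function name are made up for illustration.

    ```python
    # Split a protocol line into (command, args); non-commands are chat text.
    def parse_command(line):
        if not line.startswith("\\"):
            return ("chat", [line])
        parts = line[1:].split()
        return (parts[0], parts[1:])

    cmd, args = parse_command("\\fire 0.0 0.3 0.95 raptor11")
    # cmd == "fire"; args[-1] names the entity the client claims was hit
    ```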

    Rabin Slides on Animation (take 2)

    Looked at first few of these, go back into them when we get back to the discussion of rigging and animation.

    Some more Free Web Resources

    2 Pages on Skydome

    2 Pages on Shadows and Lights

    CVE's Start-Stop Daemon and Watchdog

    Steps Toward Making a Functional C-K (FPS) Server

    I am thinking every day we need to figure out things that need doing, and do them. I don't log in to Discord often enough; I am on too many machines, and Discord is not omnipresent for me. E-mail me.
    1. added hp (hit points). decided to also add maxhp.
    2. added \fire, \damage, \weapon, \obit to cve/src/common/commands.icn. What else do we need?
    3. added placeholders where \fire and \weapon command should be processed in cve/src/server/server.icn. What else do we need?
    4. didn't add anything for \damage yet; it would be sent out from the server in response to a \fire command. Same for \obit.

    Steps Toward Client Network Integration

    1. Modify client to have a usable chat window
    2. Demo Java code that connects to server from inside client
    3. Demo echoing net traffic in chat window
    4. Demo working chat
    5. Demo working mutual 2-avatar rez/derez each other on login/logout
    6. Demo working n-avatars rezzing each other
    7. Demo working \move

    Git

    What all have we learned about Git/libGDX best practices? Things to avoid?

    Cheap Easy Value Adds in the Current Client

    send the server \fire events
    even pre-targeting, the server can do interesting things with it. but targeting will not be hard.
    send server \move events.
    Actually, this ought to be a good way to crash the server, unless/until you switch over from the UIdaho model to the C-K model. But once switched, the server will remember where you go, so the next time you log in you will resume there (if you process the \avatar command to set your x,y,z, at least). This means \move can be tested even before you can see other users moving.
    add a checkbox on the login menu for whether to "show password"
    obscure passwords by default. could do this even without checkbox
    avatar appearance (jersey color?) should be persistent across logins
    It's fine to allow edits in-game or at each login, but normally it would be associated with account creation. We could add a simple command for that, or use the existing CVE .avt file mechanisms.
    add a single-line text input widget to allow transmission of arbitrary commands to the net
    This is very useful for testing/debugging purposes! Eventually it might be retained for in-game text chat.
    Fix the very pixelated snow on the ground
    Normally this would be done by tiling. Do texture coordinates > 1.0 work in libgdx, or do we need to research how to tile/repeat textures?
    add some mob (mob = monster or npc) health indicator
    Player needs to know: are we making progress here, or should we run? There are various ways to do this, e.g. make them slower as they get more wounded.
    At present, no mob feels very big.
    Giganticness is the whole point of some mobs; some are way bigger than elephants. Do they smash through trees, etc?
    Add a little bit of camera shake tied to the giant's stomp sounds.
    Such a mood-setting atmospheric would have a big impact.
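
    The "slower as they get more wounded" health indicator suggested above can be sketched as a speed scale. The function and its numbers are illustrative, not from the client:

    ```python
    # Scale a mob's movement speed by remaining hit points, with a floor so
    # badly wounded mobs stay threatening instead of freezing in place.
    def wounded_speed(base_speed, hp, maxhp, floor=0.3):
        return base_speed * max(floor, hp / maxhp)
    ```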

    NPCs in the CVE Server

    Given that they aren't logged in and getting server attention by sending network messages, where do server-based NPC's fit in the CVE event-driven execution model?

    Additional Considerations for Monsters/NPCs

    Behavior Trees


    Example from Joost's Blog

    Class Project Demos During Scheduled Final Exam Slot

    Who is Watching the Watchdog?

    Implementation of the \fire Command on the Server

    Miss: Hit:

    Weapon Type Switch

    \weapon userid X informs all clients every time a weapon is switched

    Transformed Social Interaction

    Ramifications: Avatar Transformation Transforming the situation Some implications

    Educational Virtual Environments

    Mare Monstrum Paper

    Basic ideas:

    CodeSpells

    Vocabulary Word

    cyranoid

    Tony Downey supplement

    GUIs in Games

    Clarification of the size of worlds

    Parallel Concerns, Moving into Virtual Environments

    We will probably go breadth-first through these to provide some coverage of all of them, rather than depth-first to cover some and skip others.

    A "missing link"

    Genre-wise, as a missing link between the toy-function JEB Demo and the many-function CVE program, we should be doing a homework that lets us explore a classic genre: the FPS, or First-Person Shooter.

    FPS's

    This is the genre that built the 3D PC graphics card industry. There were some 3D games before these, but id Software's early games really established the genre. Note that they were a tiny team without big backing, and they released their early hits as shareware in the 1990's. From the Quake 3 source code you can tell that one option for us to do an FPS in style is just to learn and understand their 335K LOC engine well enough to modify it to do whatever we want. Software engineers, after all, believe strongly in code reuse, and it is spectacularly generous of id to release their code.

    Another option is to search for higher (-level) ground, such as filling in some of the gaps between the ~2.5K LOC jeb demo and the FPS genre. The main differences between wandering around the halls of a CS department (the JEB demo) and Wolfenstein 3D or Doom are:

    animated 3D enemy characters
    3D models, simple AI
    simple combat system
    health meter, attack and damage animations
    a very few virtual objects
    weapons, armor, health kits
    The latter two have been addressed in HW#2 in this class. The animated 3D enemy characters will drive our next couple of lectures and homework 5.

    3D Modeling, Part I

    A 3D Model, for the purposes of this class
    A data structure (with an external representation -- a file format) capable of representing an arbitrarily shaped, generally solid object and that object's range of motion.
    Start with: Polygon Mesh and Texture(s)
    Add: bone structure, motion API
    (strangely?) OpenGL does not have built-in 3D models.
    If you dig, you find that SGI did a sweet toolkit atop OpenGL called OpenInventor, but it never became as ubiquitous as OpenGL. OpenInventor influenced the design of VRML, which influenced the design of X3D.
    No universal standard for 3D modeling that I know of.
    Suggestions welcome
    3D Model File Format Requirements
    "open". public. platform neutral. easily parsed. preferably human readable. "adequate" performance.
    scripted animation versus program control?
    Sort of a question of how much manual artistry can we afford. Artists make better-looking animation, but are too expensive. Ideal would be: a set of general parametric or programmable animations.
    In this class we will discuss two 3D model formats: S3D (simple 3D) and .X. We start with S3D.

    S3D File Format Overview

    Check out s3dparse.icn

    You really need to see some sample S3D files in order to get a feel for the beauty of the S3D file format. s3dparse.icn, and S3D files themselves, may have various usually-nonfatal "bugs". There was even a bug in the S3D file format document.

    We also took a look at the desperate situation vis a vis creating 3D models (a job performed by experts with much training) and our need to build such models for our games and virtual environments. Let us continue from there.

    A Grim S3D Example (Part 1)

    Consider the following texture images of a character (you can pretend it is a space marine or pirate to shoot at, if you want).

    Dr. J is 5'9" (1.75m), his elbow-elbow width is approximately 24" (0.61m) and his front-back is 11" (.28m) at the belly.

    (*this is a trick question)

    Dr. J Model 0 - Stuck in a PhoneBooth, Hardwired Code

    To get someone visible within Jeb 1, you could add something like the following right after the rooms are rendered:
        drawavatar( ? (world.Rooms) )
    
    This invokes some hardwired code to render an avatar in a randomly selected room. The procedure to render the digital photos as is, prior to any 3D modeling, might look like:
    procedure drawavatar(r)
        # place randomly in room r
        myx := r.minx + ?(r.maxx - r.minx)
        myy := r.miny
        myz := r.minz + ?(r.maxz - r.minz)
    
        # ensure a meter of room to work with
        myx <:= r.minx + 1.0; myx >:= r.maxx - 1.0
        myz <:= r.minz + 1.0; myz >:= r.maxz - 1.0
    
        PushMatrix()
        Translate(myx, myy, myz)
        WAttrib("texmode=on","texcoord=0,0,0,1,1,1,1,0")
        Texture("jeffery-front.gif")
        FillPolygon(0,0,0, 0,1.75,0, .61,1.75,0, .61,0,0)
        Texture("jeffery-rear.gif")
        FillPolygon(0,0,.28, 0,1.75,.28, .61,1.75,.28, .61,0,.28)
        Texture("jeffery-left.gif")
        FillPolygon(0,0,0, 0,1.75,0, 0,1.75,.28, 0,0,.28)
        Texture("jeffery-right.gif")
        FillPolygon(.61,0,.28, .61,1.75,.28, .61,1.75,0, .61,0,0)
        PopMatrix()
    end
    

    Dr. J Model 0.1 - Stuck in a PhoneBooth, S3D File

    The above hardwired code calls FillPolygon four times to draw four rectangles. If we draw the exact same picture with triangles...we need 8 triangles, composed from 8 vertices. Taking a wild stab, try the following .s3d file. I did not get it right on the first try (but I was close). Note that vertex coordinates (numbers like 1.75, .61, .28) are given in meters based directly on measurements given earlier.
    // version
    103
    // numTextures,numTris,numVerts,numParts,1,numLights,numCameras
    4,8,8,1,1,0,0
    // partList: firstVert,numVerts,firstTri,numTris,"name"
    0,8,0,8,"drj"
    // texture list: name
    jeffery-front.gif
    jeffery-rear.gif
    jeffery-right.gif
    jeffery-left.gif
    // triList: materialIndex,vertices(index, texX, texY)
    0, 0,0,256, 1,0,0, 2,256,0
    0, 0,0,256, 2,256,0, 3,256,256
    1, 4,0,256, 5,0,0, 6,256,0
    1, 4,0,256, 6,256,0, 7,256,256
    2, 7,0,256, 6,0,0, 1,256,0
    2, 7,0,256, 1,256,0, 0,256,256
    3, 3,0,256, 2,0,0, 5,256,0
    3, 3,0,256, 5,256,0, 4,256,256
    // vertList: x,y,z
    0,0,0
    0,1.75,0
    .61,1.75,0
    .61,0,0
    .61,0,.28
    .61,1.75,.28
    0,1.75,.28
    0,0,.28
    // lightList: "name", type, x,y,z, r,g,b, (type-specific info)
    // cameraList: "name", x,y,z, p,b,h, fov(rad)
    
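    For concreteness, here is how a tiny S3D reader might work. This is an illustrative sketch in Python rather than the course's Unicon (see s3dparse.icn for the real thing); it assumes // comment lines and the section order shown above, and it leaves parts unparsed and ignores lights and cameras.

```python
def parse_s3d(text):
    """Parse a minimal S3D file: version, counts, part list, texture
    list, triangle list, vertex list.  // comment lines are skipped;
    lights and cameras are ignored."""
    lines = [ln.strip() for ln in text.splitlines()
             if ln.strip() and not ln.strip().startswith("//")]
    it = iter(lines)
    version = int(next(it))
    counts = [int(n) for n in next(it).split(",")]
    ntex, ntris, nverts, nparts = counts[0], counts[1], counts[2], counts[3]
    parts = [next(it) for _ in range(nparts)]      # left unparsed here
    textures = [next(it) for _ in range(ntex)]
    tris = []
    for _ in range(ntris):
        f = next(it).split(",")
        # materialIndex, then three (vertexIndex, texX, texY) triples
        tris.append({"tex": int(f[0]),
                     "verts": [(int(f[i]), int(f[i + 1]), int(f[i + 2]))
                               for i in (1, 4, 7)]})
    verts = [tuple(float(x) for x in next(it).split(","))
             for _ in range(nverts)]
    return {"version": version, "parts": parts, "textures": textures,
            "tris": tris, "verts": verts}
```

    Feeding it the drj.s3d text above should yield 4 textures, 8 triangles, and 8 vertices.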

    S3D Rendering, Version 0

    Here is naive code to achieve the S3D drawing.

    Design note #1: parsing and rendering constitute enough behavior to go ahead and make a class (or maybe a built-in) out of this. However, Jafar has written far more sophisticated code we will prefer to use.

    Design note #2: while we can make a generic S3D renderer fairly easily, to animate body parts (legs, arms, etc), our model will need to insert Rotation capabilities at key articulation points. We will consider this and the S3D part mechanism after we get "out of the box" into a higher polygon count.

    Performance note: in "real life" there are polygon "mesh modes" that would allow several/many triangles in a single call. This is the kind of thing that using Jafar's classes would give you, over doing it yourself. Note that at one time I began planning a u3d file format as a minor simplification based on s3d.

    procedure draws3d(r)
       loads3d("drj.s3d")
       # place somewhere in room r
       myx := r.minx + ?(r.maxx - r.minx)
       myy := r.miny
   myz := r.minz + ?(r.maxz - r.minz)
    
       # ensure a meter of room to work with
       myx <:= r.minx + 1.0
       myx >:= r.maxx - 1.0
       myz <:= r.minz + 1.0
       myz >:= r.maxz - 1.0
    
       PushMatrix()
       Translate(myx, myy, myz)
       WAttrib("texmode=on")
    
       every i := 1 to triCount do {
          tri := triangleRecs[i]
          v1 := vertexRecs[tri.vi1 + 1]
          v2 := vertexRecs[tri.vi2 + 1]
          v3 := vertexRecs[tri.vi3 + 1]
          Texture(textureRecs[tri.textureIndex + 1]) |
             stop("can't set texture ",
                  textureRecs[tri.textureIndex + 1])
          WAttrib("texcoord=" || utexcoord(tri.u1,tri.v1) ||
                  "," || utexcoord(tri.u2,tri.v2) ||
                  "," || utexcoord(tri.u3,tri.v3))
          FillPolygon(v1.x,v1.y,v1.z, v2.x,v2.y,v2.z, v3.x,v3.y,v3.z)
          }
       PopMatrix()
    end
    

    Getting Rid of the Telephone Booth

    Jeffery is tired of living inside a phonebooth. It is even more confining than when he is a Lego-Man. For the next step, what he needs is a way to specify a bunch of vertices fast, in a manner conducive to S3D files. Furthermore, if he clicks the "same" vertex in both a front/rear texture and a left/right texture, he'll be able to (a) acquire (x,y,z) coordinates for that vertex, and (b) acquire (u,v) coordinates for that vertex to use for both front/rear-facing AND side-facing triangles. This is still primitive, but there is some hope that we can build a simple tool for it.

    Chad and Eric Say ...

    The ideas in this section are borrowed from "Game Modeling Using Low Polygon Techniques", by Chad and Eric Walker, Charles River Media Press, 2001.

    1. You should start by learning to draw, and draw detailed sketches of front, side(s), and rear of your character.
    Since we aren't artists, I will settle for digital photos. But you could substitute a drawing of yourself if you preferred.
    2. You should draw a polygon outline (profile) from the side view. The number of outline points might initially be low (20-30?); each is an (x,y).
    3. You extrude yourself by taking each (x,y) and making a right-side and a left-side (x,y,z) from it. z's can be taken by measuring your front-view width, or by measuring yourself in RL.
    4. You map textures.
    5. You refine iteratively by editing vertices and adding surfaces.
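    Step 3 is simple enough to sketch. The following Python fragment (illustrative only; the function name and conventions are my own, not the Walkers') extrudes a side-view profile into left- and right-side vertices, splitting the measured width evenly about the profile plane.

```python
def extrude_profile(profile, width):
    """Turn a side-view outline of (x, y) points into two lists of
    (x, y, z) vertices, one per side.  `width` is the measured
    front-view width (e.g. 0.61 m for Dr. J's elbow-elbow span)."""
    half = width / 2.0
    left = [(x, y, -half) for (x, y) in profile]
    right = [(x, y, +half) for (x, y) in profile]
    return left, right
```

    Pairing up corresponding left/right vertices then gives the quads (or triangle pairs) for the character's sides.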

    .X File Format

    Pointers from Jafar:

    Thoughts on a 3D Pirate Ship

    Next let's see what the Rabin chapter on Character Animation has to say about 3D Modeling. Chapter 5.2, Character Animation (Chapter 5.2, original).

    (lecture covered slides 1-14).

    Another .X model Example

    u3dview:

    from: warrior.x and warrior.gif

    Discussion of u3dviewer

    Unicon's uni/3d/viewer application shows how to pull a 3d model from a .x file and render it within a 3D application such as the JEB demo.

    Reflections on "A/I week"

    Rabin: Character Animation (cont'd)

    What are the important parts of Chapter 5.2, Character Animation (Chapter 5.2, original).

    Multi-user Games

    Beyond two players sharing a keyboard or a couple of joysticks, most multi-user games employ network communication in order for players to interact in the game. Let's pre-test your network programming knowledge against the following buzzwords:

    While this course isn't about networks, games use them and it is appropriate to provide a brief introduction to network programming. Especially if you have never done network programming before, you should read Chapters 5 and 15 of the Unicon book for a discussion of network programming in Unicon. For other languages you wish to use, you should seek out (and try out) their comparable functionality.

    The main networking buzzword: Protocol

    Q: So, what is a network protocol? A: A network protocol is the "language" by which two programs agree to communicate over the network. The format of each individual message is analogous to a file format, and also to the lexical and syntax rules used in compilers. The sequences of messages that are allowed are analogous to the syntax and semantic rules in compilers.

    Two major families of protocols

    Stream-oriented protocols are usually more human-readable, with ASCII text line-oriented message formats. For example, the HTTP protocol sends its headers as a sequence of lines in an easily readable format like:

    Fieldname: value
    Fieldname2: value2
    
    ... ending with a blank line, after which the data payload follows.
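    Parsing such a header block can be sketched in a few lines. This is a hedged Python illustration (the helper name is my own; real HTTP parsing has more wrinkles, such as continuation lines and case-insensitive field names):

```python
def parse_headers(lines):
    """Read 'Fieldname: value' lines until a blank line, returning a
    dict of fields; the data payload follows the blank line."""
    fields = {}
    for line in lines:
        line = line.rstrip("\r\n")
        if line == "":          # blank line ends the header block
            break
        name, _, value = line.partition(":")
        fields[name.strip()] = value.strip()
    return fields
```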

    Scalability and Multi-user Gaming

    Multi-user games have both soft and hard limits on how many users they can handle. Network programming can be easy, but naive network programming will result in surprisingly bad limits. For example, during the first year of my collaborative virtual environment project, the system degraded and failed after 5 users: things were slightly degraded but usable with 5 users, and would hang and eventually crash with 6 users. In order to raise the limits, you have to become aware of several technical limits which interact:
    bandwidth
    this is the "easiest" limit to remember: most of us know our internet connection can only transmit so many bytes per second, so transferring big files takes time. What you need to add to that knowledge is that during a single connection, bandwidth fluctuates wildly as other internet traffic varies. Also, across a WAN the bandwidth is limited by the "weakest link": my office machine has a gigabit NIC, but its connectivity to NMSU is only 1.2 megabits/second, and that factor-of-800 slowdown varies from second to second (from 800X to 1600X to infinity). Compressing files might take many billions of CPU instructions but still greatly speed up a large transfer; compression is a typical bandwidth-reduction mechanism.
    latency
    the delay between sending and receiving packets also varies from very low to very high. Typical across-campus latencies might be 50ms (up to 20 one-way messages per second, or up to 10 round trips per second), while latency across the internet can reach the 1.5 second range. Multiuser games have to be designed not to depend on low latency; for example, a "heartbeat" to keep all players in sync is a tempting idea, but if local client redraws waited for such a heartbeat, you would not get a high enough refresh rate for smooth animation. Dead reckoning is a typical latency-compensation mechanism.
    # of packets
    The bandwidth may be near infinity, the latency may be no problem, and the network may still impose limits: each packet costs the OS a lot of processing time to handle, whether it carries 6 bytes or 1.5KB or more. Packet aggregation is a typical packet reduction mechanism.
    GPU
    A GPU may easily be a limiting factor on the # of users. If graphic updates/changes are proportional to # of users, or # of polygons to be displayed is proportional to # of users, adding more users will gradually break the client's ability to update, starting with low-end non-GPU computers and working up even to high end machines. Level-of-detail is a typical means of compensating for limited GPU resources.
    CPU
    A CPU may easily impose a limit on the # of users. CPUs handle core gameplay and user interaction plenty fast, but games tend to dump lots of extra work on the CPU, assuming it is an infinite resource. If you are on a low-end CPU, you may save CPU cycles by NOT doing many of the other limitation-reducing techniques, which themselves consume CPU resources. Switching to more efficient algorithms and data structures, and doing profiling and performance tuning, are other typical CPU-saving techniques.
    Main memory
    Main memory is usually a major bottleneck in modern computing systems.
    OS
    Any interaction with the operating system will greatly slow your program down. Whole careers have been built on the art of reducing the number of OS calls. For example, in MS-DOS days the standard thing to do was to skip the OS and write directly to video memory for fast graphics. In modern UNIX and Linux systems, processes were too slow so threads were invented, and OS threads were too slow so "user threads" were invented.
    disk
    A program that spends time waiting for disk I/O may not be able to sustain game-level refresh rates. Many disk operations can be avoided by leaving files' contents in main memory, and only writing changed items out to disk periodically.
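    Dead reckoning, mentioned under latency above, can be sketched in one function. This is an illustrative Python fragment, not CVE code: the local client keeps animating a remote avatar by extrapolating from its last reported position and velocity, then corrects when the next update actually arrives.

```python
def dead_reckon(last_pos, velocity, dt):
    """Estimate a remote player's current position by extrapolating
    dt seconds forward from the last reported position and velocity,
    so the local client can keep redrawing smoothly between
    (possibly slow) network updates."""
    return tuple(p + v * dt for p, v in zip(last_pos, velocity))
```

    When a real update arrives, the client snaps (or blends) the avatar to the reported state and starts extrapolating again.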

    How to Handle More Users - discussion of processes and threads

    The oldest internet models have a single-process, single-thread server that receives a request, replies immediately, and awaits the next request.

    This was soon followed by a "fork-exec" model, in which each incoming connection triggers a new process, so that multiple users can be served simultaneously. A separate server process for each user gives good fault tolerance (one user's server process crashing might not affect the others') but poor/slow communication for applications where users interact with each other via the server.

    Since process creation is slow, "fork-exec" has been replaced by various newer models, including farming the work out to a pool of pre-created processes, and using threads instead of processes.

    Context switching between processes is very slow, and even switching between threads is pretty slow. In addition, communication between processes or even threads is slow. For these reasons, modern multi-user servers might have each thread handling several user connections -- especially if certain users tend to communicate together a lot. The number of users per thread might depend on how CPU-intensive the server threads' tasks are in support of each user -- if the server has to do a lot of work for each user transaction, it is easier to justify a separate thread for each user.
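    One piece of the several-connections-per-thread idea is deciding which users share a worker thread. A toy Python sketch (the function name and the round-robin policy are my own assumptions, not a real server's API): users in the same group, who tend to talk to each other, are kept on the same thread so their interactions never cross a thread boundary.

```python
def assign_to_threads(users, n_threads, group_of):
    """Assign each user to one of n_threads worker threads, keeping
    users in the same group together.  group_of(user) names the
    user's group; groups are spread round-robin across threads."""
    threads = [[] for _ in range(n_threads)]
    group_thread = {}
    for user in users:
        g = group_of(user)
        if g not in group_thread:
            # first time we see this group: pick its thread round-robin
            group_thread[g] = len(group_thread) % n_threads
        threads[group_thread[g]].append(user)
    return threads
```

    A real server would also rebalance as groups grow and shrink, and weigh how CPU-intensive each user's transactions are.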

    Are Virtual Environments only for Large Corporations?

    I am not the only person crazy enough to propose garage-scale MMO development. Note that without a certain level of 3D graphics capability we cannot undertake this goal at all, and unless we find a way to make 3D graphics quite easy, it is far beyond our available resources.

    Introduction to OpenWonderland and OpenSim

    Introduction to CVE

    CVE is our homemade research CVE. It lives at cve.sf.net. CVE has been called Unicron and VIEW in the past, and may get renamed in the future. I would always like a better name.

    More on Virtual Environments

    CVEs are a preliminary, low-grade form of virtual reality as envisioned in the 1980's by science fiction authors such as William Gibson, Neal Stephenson, and others. While these authors and many subsequent movies have envisioned computer environments indistinguishable from the physical world, CVEs run on conventional computers and are only as "immersive" as one's imagination and one's computer monitor allow them to be. Intermediate forms of virtual reality are made possible by higher end 3D display devices such as the current crop of 3D TV's, and motion tracking hardware and software.

    Hard Technical Subjects in CVEs

    CVEs are potentially amazingly complex pieces of software. A CVE generally requires sophisticated 3D graphics (hard), complex peer-to-peer and/or client/server multiuser networking (hard), and a lot of application domain logic for the type of collaboration that is to be supported. CVEs may also integrate many other aspects of CS (such as artificial intelligence) to make the virtual environment richer and more useful.

    Because writing a CVE is potentially so incredibly technically challenging, there is a danger that the only people who can do it are large multi-million-dollar industry labs. In this class we are interested in CVEs as vehicles for both direct and indirect research:

    CVEs as Places for Action and Interaction

    How do people collaborate in a virtual environment? For CVE's to be useful, their concept of space needs to make sense. Some rooms may be for specific activities, or specific kinds of work; others may be shared, or used for different purposes at different times.

    Collaborative Work

    The "Collaborative" part of a CVE ties it to the field of Computer Supported Collaborative Work (CSCW). This is a relatively well-established area of CS, with its own community. We will explore this field to some extent in this course. Here are some immediate implications:

    Shared Context

    Seeing what each other is doing; seeing each other's past; shared access to data; shared space in the 3D environment.

    Awareness of others

    See not just that other users are there, but what the other users are doing, especially when it affects or relates to what you are doing.

    There is foreground awareness and background awareness. Things should be in the background unless/until they start interfering with what you are doing.

    Background awareness may include users' real locations and schedules, whether they are at their keyboard and looking at the screen at the moment, what task they are performing, etc.

    Communication

    There are several dimensions to the direct communication between computer users. Communication can be textual, graphical, and audio/video. It can be in real-time or recorded for later. Things like tone of voice, hand gestures and eye behavior can significantly affect real conversations; how can they be approximated in computer-based communication?

    CVE's and Entertainment

    Games have driven many of the recent advances in computer graphics, and CVE's are no exception. Videogames like Doom proved that 3D applications could be highly immersive even without photorealism. MMORPGs such as EverQuest have proven the potential of CVE's far more convincingly than the research products discussed in our textbook and the CVE conferences.

    With a compelling proof-of-feasibility like Everquest in mind, we cannot help but believe that a CVE will soon dominate many fields of remote communication and endeavor. It is only a matter of time before CVE's are used for distance education, virtual dating and sex, live theater, circus and other public performances, as well as major meetings such as conferences, associations, and the activities of governmental organizations.

    Content Creation

    There are two types of user-created content envisionable in current and future virtual environments. You can think about this from the point of view of the game author (e.g. Blizzard, or you doing your HW) or from the inexorable web 2.0 point of view: interactive, end-user creation.
    visual/graphical content
    composed from 3D primitives, includes both static and dynamic/behavioral content. Our conspicuous examples are SecondLife and Minecraft.
    things to do in-game.
    part of this is game design/mechanics/coding. When is it content creation? We mentioned a few examples of this previously, such as City of Heroes' architect facilities.
    I'd like to think about both.

    One of the reasons to study content creation is to test its limitations and see what ideas ought to be present in future virtual worlds we might build.

    Creating Activities

    The only thing better than end-user world-building is end-user activity building. In principle, there should be a range of mechanisms for end users to create things to do in-game. One conspicuous way to do this is to allow end-users to create "quests", and this has various possible implementations, but I would like to brainstorm a little for other ways.

    City of Heroes' Quest Creation Tools

    So far, I have only tried going on someone else's quest. Sure enough, it is like an instanced dungeon. By the way, end users are liable to make impossible quests.

    JEB3

    www.cs.uidaho.edu/~jeffery/courses/game/jeb3.zip contains a many-room aggregation of JEB formed by a prior Games and Virtual Environments class, which was later refined and incorporated into CVE. It may lack stairwells and connections between rooms, some of which were later added to CVE.

    Avatars in CVE

    CVE's Avatar class was designed originally around hardwired "lego-man" graphics that employ a small number of OpenGL primitives. Jafar was kind enough to tuck our 3D model-based avatar graphics into the existing class structure, but there are interface questions. As originally written, an avatar consists of a vastly simplified bone structure, with legs and arms having hands and feet, but not (for example) knees and elbows. A previous student group semester project did add knees and elbows but up to now their work hasn't been merged into the main source code base. The programming API for avatars includes the ability to have them move arms and legs in very simple ways to approximate walking, raising one's hand, and pointing, but a 3D model might or might not have a bone structure, might or might not have a pre-existing animation for walking or raising one's hand, and typically will not do both at once, or support pointing in arbitrary directions. What to do?

    Proposal: add 3D model file "parts" for each avatar body part in the model. Write a new subclass of Avatar and of Body Part to work off of (and be populated from) the S3D data.

    Notes on the .dat parser/model constructor

    The .dat file format was originally a single file; in CVE it is currently split into two files (dat/nodes/model.dat and dat/edges/static.dat), but this was a mistake introduced by a student. There are tremendous advantages to keeping a single-file format. The farthest we got into the parser before was to see that the world builder sets up a big string scanning job (of the entire .dat file) and when it finds "Room" it calls a procedure parseroom(), and so on. The parser is top-down recursive-descent code which builds a set of interconnected objects. Procedure parseroom() first calls some generic parsing code which populates a table with all the named fields of the room:
    procedure parseroom(s,f)
    local t, r
       t := parseplace(s,f)
    
    It then builds a room object:
       r := Room(t["name"], t["x"], t["y"], t["z"],
    	      t["w"], t["h"], t["l"], t["texture"])
    
    Mapping the table t (all contents of the .dat) to the Room r is a double-edged sword. On the pro side, fields in the .dat can be in any order, and extra fields cause no harm. (If a field is missing from a room, the Room constructor had better have a default it can use.) On the con side, an extra memory copy is happening here that could be avoided if the instance itself were passed in and populated. parseplace() is highly polymorphic (one code used for many types of objects composed from fields) and that would complicate its internals.

    Procedure parseplace() builds the table (a set of fieldname keys and associated values). A place is terminated by a "}" when it appears by itself, not as part of a field (fields are parsed by parsefield()).

    procedure parseplace(s,f)
    local t, line
        t := table()
        while line := readlin(f) do line ? {
            tab(many(' \t'))
            if ="}" then break
            if &pos = *&subject+1 then next
            parsefield(t, tab(0), f)
        }
        return t
    end
    
    parsefield() grabs a field name (delimited at present by space/tab characters), which will serve as a key in the table. It then calls parseval() to parse a value, which may itself be a complex structure.
    procedure parsefield(x,s,f)
    local field, val
       s ? {
          tab(many(' \t'))
          (field := tab(upto(' \t'))) | {
             write("fieldname expected: ", image(tab(0)))
             runerr(500, "model error")
          }
          tab(many(' \t'))
          val := parseval(tab(0),f)
          if field == "texture" then val := world.find_texture(val)
          if field == "action" then {
             /(x["actors"]) := []
             put(x["actors"], 1)
          }
          x[field] := val
       }
    end
    
    A value by default might simply be an arbitrary string after the fieldname, extending to the end of the line. There are three special cases which have more complex semantics: a numeric constant, a Wall object, and a list.
    procedure parseval(s,f)
    local val
       s ? {
          tab(many(' \t'))
          if val := numeric(tab(many(&digits++"."))) then return val
          else if ="Wall" then return parsewall(tab(0), f)
          else if ="[" then return parselist(tab(0), f)
          else return trim(tab(0))
       }
    end
    

    If we chased inside parselist() we would find that other virtual objects must appear inside a list object, while Walls do not. It seems odd (and basically bad design) to single out Wall() here as a special syntactic entity.

    Dr. J should add additional notes here on parsewall() and parselist().

    The recommended way to test your room data + textures is to run your sample data in the jeb1 demo. At least one student reported the jeb1 demo not running for them on Windows. It ran for me on a Vista laptop and on XP under VMware on Linux... but if you are having difficulties, see me for help, or try another machine. Oh: what image file formats are you trying to use? .gif is safe. .jpg and .png are "maybes". libjpeg worked on Linux and not on Windows last time I checked. Jafar claims we have libpng support, but I'm not sure whether that has been built into the Windows version yet, either.

    Virtual CS Project Update

    Earlier I pointed you at jeb1.zip as a demo; there is a longer demo named jeb2.zip. jeb2 has a program named modview that helps with x-z coordinate calculations, given a floor map of the building. Idea: the .dat file reader should add some semantic error checking, besides needing better syntax checking. For example, if an obstacle or decoration of any type extends beyond its room boundaries, this might be flagged as an error.

    The Server and Protocol

    Up to now we have been single-user, and a game engine would have served us better. CVE adds connections to a server, with chat, avatar interaction, a collaborative IDE, etc. Unicon's networking capabilities basically boil down to: almost as easy as reading and writing text in local files. Let's talk about the server some more.

    Developing n-User Chat Capabilities

    From "chatting" we have to make the big leap to seeing each other. But let's start with just "chatting". Some notes about this demo:

    Server State and Network Protocol

    What information is needed on the server? How shall the server store that information in nonvolatile memory?
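    One common answer to the nonvolatile-storage question is to keep the authoritative state in memory and write only changed entries to disk periodically, i.e. a write-behind cache (the same disk-avoidance idea mentioned earlier under scalability limits). A minimal Python sketch with hypothetical names:

```python
class WriteBehindCache:
    """Keep state in memory; mark entries dirty on update and write
    only the dirty ones out when flush() is called periodically."""
    def __init__(self):
        self.data = {}
        self.dirty = set()

    def put(self, key, value):
        self.data[key] = value
        self.dirty.add(key)

    def get(self, key):
        return self.data.get(key)

    def flush(self, write):
        """Call write(key, value) once per dirty entry, then clear
        the dirty set; returns how many entries were written."""
        for key in self.dirty:
            write(key, self.data[key])
        flushed = len(self.dirty)
        self.dirty.clear()
        return flushed
```

    A server would call flush() on a timer (or at checkpoints), passing a function that writes each entry to its file or database row.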

    System Challenges for Multiuser Online Virtual Environments

    Understanding Network Requirements for CVEs

    Idea: in virtual classroom, people sit down -- that means the network and server don't have to worry about transmitting movement events while they are mainly transmitting audio and/or video instead. Could this be exploited? Better VOIP if you get everyone to hold still?

    Distributed Architecture Implications

    CVE Extended Demo

    Today's demo will just try and visit every menu item and every function that we can manage, to see how many times we can get things to crash, and how many features/functions we can use successfully. We will keep a running count.

    CVE: Following Execution Across the Net

    When an avatar is moved, besides the 3D display list graphics tweaks, protocol strings corresponding to those moves are "queued up" to be sent in a batch over the network, and the local client world is marked as needing refreshing. Some number of
    	 put(grouping, moveuid || "part " ||  name || " " || dir || " " || ang)
    
    are followed (at the end of actions()) by a call to flushnet()
       method flushnet()
          if session.isUp() then {
             session.Write(grouping)
             grouping := list()
          }
       end
    
    Session's Write() method bundles up a list of strings as a single string, so it gets sent as a single packet. Where does it go then? The server receives these commands... and does what?
    # server.icn::run()
             if not (L := select( socket_list, Ladmins )) | *L=0 then next
    ...
                      if buffer2 := Tsock_pendingin[sock] || sysread( sock ) then {
    ...
                         buffer2 ? {
                            while buffer := tab(find("\n")) do {
                               ExecuteCommand( sock )
    ...
                "move":   {
               dynStHandler.saveAvatarState(Cmds,sock,Tsock_user, parsed[2])
               dynStHandler.getRecepientUsers(Cmds, sock, Tsock_user,
                              Tuser_sock, TrecpUser_sock,
                              "AvtMove",parsed[2])
               sendtoSelected(sock, TrecpUser_sock, parsed[2], "move", 1)
    
    Saving state involves writing to server local disk. getRecepientUsers is another matter.
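    The pattern above, where the client's Write() bundles queued strings into one buffer and the server splits the buffer back apart on "\n", is newline-framed packet aggregation. Here is a Python sketch of the two halves (function names are my own, not CVE's):

```python
def bundle(messages):
    """Client side: join queued protocol strings into one buffer so
    they go out as a single packet, newline-framed."""
    return "".join(m + "\n" for m in messages)

def unbundle(buffer):
    """Server side: split a received buffer back into complete
    messages; an unterminated tail is returned as leftover to be
    prepended to the next read (cf. Tsock_pendingin above)."""
    complete, _, leftover = buffer.rpartition("\n")
    msgs = complete.split("\n") if complete else []
    return msgs, leftover
```

    Sending N messages in one buffer costs one packet instead of N, which is exactly the packet-count reduction discussed under scalability.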

    CVE Source Code: the Next Level of Detail

    Let us take a look at: nsh.icn, nshdlg.icn, nsh-world.icn, and the scene graph files.

    Biologically-inspired algorithms in game A/I.

    Reading Assignment

    Network Protocols for CVE's (Greenhalgh)

    Lecture XX. Future Trends in Games and Virtual Environments

    Presence in Shared Virtual Environments (Buscher et al)

    Should different vendors' virtual worlds know about each other and interoperate? IM systems have gradually veered towards being able to send messages across platforms... Should our model of other users in the CVE include sending and receiving messages via regular internet e-mail and such? In the old days, on a shared UNIX system where the entire department used a single machine, I could easily tell who was logged in, whether they were away from their keyboard (idle time), and maybe even what application they were running. If I had that user's cooperation, how easy would it be for me to check whether one of you was using a computer (not in the CVE)?

    Methods of locating someone else in virtual space. Some of these are graphical, some textual, and some could be either. Perhaps some CVE's would eschew some of these techniques in order to be "realistic". What other methods can you think of?

    "corpsing" -- when someone is away from their keyboard, but you have no way of telling that.

    The Problem with 3D Environments

    Navigating in 3D quickly becomes so hard that we can't concentrate on other tasks. I have seen this recently in 3D modeling tools, and have seen it before as a challenge to people trying to design a 3D virtual mouse widget for navigating. How do we solve this problem? Conundrum: put people and information together in collaboration spaces that inform people about what is happening...without preventing us from doing our real work.

    Themes

    Avatars

    How users represent themselves to each other; the degree of realism and level of detail available for communication purposes; the presence of computer-controlled virtual persona.

    Bandwidth, scalability, and fundamental limits of hardware

    The level of network connection, the number of users, and the capability of graphics hardware are all intertwined to affect the quality and usefulness of a CVE system. I am also interested in the complexity and difficulty of writing CVE software as a limiting factor, and of developing languages and tools to reduce that difficulty so that richer CVE's can be developed.

    Augmented virtual reality

    How much real-time real world information can be made visible in a CVE?

    Obvious Things About Commercial Game Virtual Environments

    Unobvious Things About Game CVEs (MMOs)

    "Environment" Imposes Some Essential Features on CVE's

    Last lecture, CVE's were portrayed as multi-user 3D applications. But not every multi-user 3D application has to be a CVE, and based on the following feature list, some non-3D applications are "almost" CVE's.
    a CVE has a virtual world
    This is a large space, typically with many locations and objects. Its goals are: make the user feel they are in that place, and make the user feel that within that place, they can accomplish their goals.
    a CVE has time, and persistence
    Things that happen in one CVE session affect later sessions. There is no "pause" button; the virtual world is happening whether you are there or not; things happen while you are out. Users do not ever "start over".
    a CVE is reactive
    CVE's content is user-controlled. Users do whatever they want, rather than following a script.

    Additional "Desirable" Properties of CVE's

    a CVE should be as "realistic" as possible
    laws of physics, day/night, weather, and graphical realism are all examples where the CVE doesn't have to work the way the real world does...but most CVE's will be more "immersive" and understandable to users if they reflect the real world.
    a CVE should scale to handle "as many users as needed"
    other people are a primary aspect of our real environment, and allowing only 4 people, or 16 people, isn't very realistic.
    a CVE should be richer than necessary
    is it a "place", or just a chat/teleconferencing program?

    Anatomy of a CVE

    What technical pieces must be in place to make a CVE?
    Users must be able to communicate
    This implies Internet or other transport layer, plus common language or robust translation capability. "Language and translation" apply to both the network communication protocol and the human users.
    Clients must be able to "render" the virtual world usefully
    This implies sophisticated graphics software and a minimum hardware platform. In this class, I believe we can obtain or write sophisticated graphics software, but let us ask: do we all have access to minimum hardware?
    Users' views of the virtual world must be (somewhat) consistent
    There is a big problem if the time grows too long between your doing something and the time other users see what you do (latency). Some actions are more important than others in this regard.

    New CVE Concept of the Day: Goal Assistance

    We didn't see an example of a mission in the City of Heroes demo, but essentially a "mission" is a goal the game offers to the player: if they complete a certain task (maybe delivering an item to someone who needs it, finding some lost artifact, or defeating some villain), they will receive a reward.

    In real-world-based CVE's, there may not be "missions" but there may still be goals, such as: complete a homework assignment so that it passes an automatic submission tester.

    One interesting point: people cannot always remember the details of their goals: where to go, what to do, etc. They sometimes wind up writing down the instructions they were given by a (computer-controlled) character in the game. It is sort of obvious that the computer should provide assistance with this task, providing some of the capabilities of a Personal Digital Assistant, such as a todo list. City of Heroes does this rather nicely.

    Scalability of CVE's

    Around the year 2000 the scalability limits reported in [Churchill, Ch 2] were something like 8-64 "mutually aware" users. EverQuest zones are not dissimilar: somewhere between 50 and 100 users, zone performance becomes unpleasant.

    Robinson et al distinguish between upward scalability (more people) and sideways scalability (different people). Besides scaling users and groups, they argue for more different kinds of objects in CVE's, especially objects with real-world presence (machines, printers, files), where manipulating the object in the CVE causes real-world work to get done.

    Should all this work happen inside the CVE? Robinson argues for the 3D part of a CVE to be only one of many different collaboration programs, complementing other forms such as document viewers, web, and audio/video connections. In support, they observe that the 3D CVE's usually overemphasize the people, while other applications usually underrepresent them.

    They are arguing that all our mainstream applications should become CVEs and propose a VIVA architecture along these lines. There are many pros to this approach, such as accessibility and interoperability when 3D graphics are not available, from the web or a PDA, etc. What are the drawbacks to trying to make all our regular applications CVE's? Do Robinson et al identify those drawbacks?

    Videoconferencing: a perpetual prince, but never a king?

    "Phone and email continue to grow exponentially, while videoconferencing use remains flat" - why is this? Are CVE's bound to have the same lack of adoption as videoconferencing? When video is getting used, it is not to look at the people but to look at the objects they are working with. What does this say about CVE's? Objects and conversations about them need to be seamlessly connected.

    Other ideas:

    Besides "master servers", VIVA uses at least 6 kinds of special-purpose servers. Traditional services of "VR servers": spatial data processing, collision detection, awareness and authorization services, environment partitioning. Dynamic repartitioning is seen as central to scaling to more users.

    [Snowdon96]: A review of distributed architectures for networked virtual reality. Virtual Reality: Research, Development, and Applications 2(1), 155-175. This paper gives a reference architecture consisting of:

    Where we are at in the course so far

    We haven't found all available CVE's that are out there on the internet, but we have found a number of them. Good news: I will continue to award points on HW#1 for new sites or tools you find. Bad news: I will not consider your HW#1 completed until you have tried out some CVE from the sites we've found, been online within the CVE long enough to gain some experience with its graphics and communication facilities, and report on your experience in class.

    Notes while trying to test your HW#K

    What, Binaries?
    Of course I am very uninterested in binaries, I want source code and any resources (e.g. .gif files) bundled up (.zip is probably best, .tar or .tar.gz or other common formats OK).
    Missing makefile?
    Turn in everything I need to compile and run your program correctly, if you can. This would include a makefile or batch file in the .zip that you turn in.
    Runtime error?
    It is not uncommon in Unicon to get runtime errors; don't be panicked by them, but do get practiced at reading them. A sample one is
    Run-time error 107
    File cve.icn; Line 244
    record expected
    offending value: &null
    Traceback:
       main()
       make_SH167(...parameters...) from line 73 in please8.icn
       Room_add_door(...parameters...) from line 19 in please8.icn
       {&null . coords} from line 244 in cve.icn
    
    Now, here are some screen shots from last year for tools I was able to run:

    Lecture 31. Future Trends in Games and Virtual Environments

    CVE's Using Symbolic Acting (McGrath/Prinz)

    Symbolic Acting

    You don't have to control your avatar's appearance and gestures; the system does it for you, based implicitly upon your activities. The avatar represents you to others (its automatic actions are symbolic of your real actions). Example: when a window is on top of your 3D view, your avatar appears to be reading a document. Put people together based on subject matter: if you start editing a .c file, your avatar might automatically head to the virtual C lab, so others working on C can notice you. Alternatively, you might put people together based on their activity (analogous to meeting by the copier, or meeting at the drinking fountain).

    In between silence and talking there is a continuum comprising mutual sense of presence, body movement, mutual gaze awareness, and the trajectory of body motion (towards someone, away from them, on a route unrelated to them, etc.). "Sleepy mode": avatar looks like an ice cream cone.

    Contact Space and Meeting Space: the lounge versus the seminar room. The big difference is whether others can interrupt.

    Nessie world: different rooms for different working contexts. Avatars are Lego puppets. Agent avatars signal time of day (waiters, janitors?), active virtual furniture shows external values and activities (temperature? stock values?). Projecting the CVE in the background: on the wall, or maybe on the root window?

    Experience results: the "meeting space" won't be the focus during meetings, the focus is on the material presented, documents being reviewed, etc. People will want to customize their avatars, but do not need (or want) them to look exactly like in real life.

    Who is talking in the CVE? Small window size means this vital information may need to be exaggerated.

    When is symbolic acting unfortunate? When it embarrasses you publicly, because you want to quietly work on something else (say, surf a website) while in a meeting in the CVE in another window. Sometimes you don't want the system reporting your every action to others!

    More issues: security can be a problem; no one wants to have another account to login to; contact space needs to cooperate/integrate with e-mail, telephone, etc.; contact space needs to be accessible via PDA/cell phone, etc.

    The Forum is not just chat, it is the ability to comfort, monitor, increase awareness, and observe others.

    Dr J's idea of the day: "selecting" (clicking on) another avatar with your mouse might send an acknowledgement message to the other person, to let them know you are looking at them and they have your attention, as a precursor, if they wish to chat.

    More on Modeling 3D Spaces

    So far, you have constructed simple models of a single room, and we have (perhaps by today) normalized our coordinates such that, if we did it right, we could stick all the rooms into a single application and form a space with a number of rooms. An additional major topic will be: how to create 3D objects and avatars, and implement a persistent state for the virtual world.

    This course is not about 3D Graphics: we won't be covering advanced algorithms for photo-quality rendering like they use in the CG movies. But, everybody can learn enough 3D graphics to be useful for your project.

    Some 3D Geometry

    At some level every object in the 3D space, including floors, ceilings, and walls, must be represented in a geometry system as a set of polygons, each of which has a set of (x,y,z) points and some attributes to specify its physical appearance, such as color and texture.

    In practice, complex objects are composed from simpler objects. Each simpler object that is part of the more complex object is given by specifying its location and orientation with respect to the complex object. This is how, for example, you might attach an arm to a torso, or attach different pieces to a table or lamp or whatever.

    Location and orientation are more generally given by the operations Translation, Rotation, and Scaling. A basic result of early work in computer graphics was to combine and apply all three operations via a single matrix multiplication. We don't have to write the matrix multiplication routine, see CS 476 or a linear algebra class for that. We can just enjoy the fruit of their labors as manifested in our 3D graphics library (OpenGL) and a higher level API built on top of it.

    At some point the "outermost" objects (say, an entire table or an entire person) are placed into the virtual world by similarly specifying the object's location and orientation with respect to World Coordinates.

    Rendering an object in room coordinates example:

    PushMatrix()
    Translate(o.x, o.y, o.z) # position object within World Coordinates
    o.render()		 # object rendered in Object Coordinates
    PopMatrix()
    
    Note that if a subobject is rotated relative to its parent object, the rotation will look crazy unless the subobject is first translated to the origin, then rotated, then translated back to its intended position.
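    As a concrete (if simplified) illustration of that translate-rotate-translate pattern, here is a 2D sketch in plain Python rather than Unicon (`rotate_about` is a hypothetical helper, not part of any library):

    ```python
    import math

    def rotate_about(px, py, cx, cy, degrees):
        """Rotate point (px, py) about pivot (cx, cy):
        translate the pivot to the origin, rotate, translate back."""
        rad = math.radians(degrees)
        tx, ty = px - cx, py - cy                      # translate to origin
        rx = tx * math.cos(rad) - ty * math.sin(rad)   # rotate about origin
        ry = tx * math.sin(rad) + ty * math.cos(rad)
        return rx + cx, ry + cy                        # translate back

    # Rotating (2, 1) by 90 degrees about the pivot (1, 1):
    print(rotate_about(2, 1, 1, 1, 90))   # approximately (1.0, 2.0)
    ```

    The same composition applies in 3D: bracket the subobject's rendering with Translate() and Rotate() calls so the rotation happens about the subobject's own origin.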

    Drawing Primitives

    There are a lot of drawing primitives available besides the FillPolygon() function we have used almost exclusively up to now: DrawCube(), DrawCylinder(), DrawDisk(), DrawLine(), DrawPolygon(), DrawPoint(), DrawSegment(), DrawSphere(), DrawTorus().

    These primitives are further flexified by the Scale() function. When stretched (via scaling), primitives like DrawCube() can handle any rectangular shape.

    Lighting and Materials

    Not all objects need to be textured. In fact, given how expensive textures are, and how limited a resource they are, we probably ought to avoid textures except where they are necessary, i.e. when an object has a mixed or rough surface texture, or when we have a special situation.

    OpenGL has 8 lights, which can be turned on or off, positioned at specific locations, and can feature any mixture of three different kinds of light: diffuse, ambient, and specular. Diffuse seems to be the dominant light type, with the others modifying it. In the example:

       WAttrib(w,"light0=on, ambient blue-green","fg=specular white")
    
    Objects would look their normal (diffuse) color given by their foreground ("fg") attribute, except there would be a bit of blue-green on everything from the lighting, and objects that have high shininess (read your manuals!) will reflect a lot of white on the shiny spots.

    In addition, if you are not using a texture, the "fg" attribute for an object can include an object's appearance under the three kinds of light, and can include a fourth kind of light, emission, where the object glows all on its own.

       Fg(w, "diffuse light grey; ambient grey; _
              specular black; emission black; shininess 50")
    

    One thing that was added recently to the 3D facilities is the ability to blend the texture and the fg color when drawing an object ("texmode=blend"). One thing that is going to be added in the future (as soon as I get a student to help) is a set of predefined / built-in textures ("brick", "carpet", "cloth", "clouds", "concrete", "dirt", "glass", "grass", "grill", "hair", "iron", "marble", "metal", "leaf", "leather", "plastic", "sand", "skin", "sky", "snow", "stone", "tile", "water", and "wood").

    Introduction to Networking in Unicon

    Today we start on networking. In order for our CVE's to be collaborative multi-user applications, we must tackle the networking communication aspects.

    Reading: Unicon book, chapters 5 and 15.

    Networking support in Unicon was designed by Dr. Shamim Mohamed (Logitech, Inc. of Silicon Valley) with a little help from Clint Jeffery, implemented for UNIX by Shamim Mohamed, and ported to Windows by Li Lin (M.S. student) and Clinton Jeffery. These capabilities are simple, easy to use communication mechanisms using the primary Internet application protocols, TCP and UDP. Unicon also has "Messaging facilities", providing support for several popular network protocols at a higher level than the network facilities (HTML, POP, ...), done by Steve Lumos (M.S. student).

    Networking for Non-nerds

    The Internet is very simple (ha ha), it is just a connection between all the machines that are connected, and a set of routing rules for how to deliver messages. Not counting the routers that just pass messages around in the middle, there are fundamentally two classes of machines: clients which mainly initiate connections on behalf of users, and servers which provide information. Messages are routed through the internet using IP numbers. Clients' IP numbers are often transitory, and used only to connect internally to a gateway within an organization in order to start an outgoing information session. Servers usually have a fixed IP number, visible either only within an organization behind its firewall, or on the public Internet where they are subject to hack attacks of all types, but where as a group they constitute the main value the Internet provides.

    Besides the IP number identifying a particular machine, most Internet services specify a port at which communication takes place; the ports serve to distinguish different programs or services that are all running or available on a given server. The ports with small numbers (say, the first few hundred) have standard services associated with them, while higher numbered ports can have arbitrary server-defined associations to custom applications. Ports providing standard services can usually only be run by the administrator of a machine; ordinary end users can generally use higher numbered ports.
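    To make the client/server/port picture concrete, here is a minimal loopback echo sketched in Python rather than Unicon (binding to port 0 lets the OS pick a free, non-privileged port):

    ```python
    import socket
    import threading

    def serve_once(server_sock):
        """Accept one client, echo its message back, then exit."""
        conn, addr = server_sock.accept()
        data = conn.recv(1024)
        conn.sendall(data)          # echo the bytes back to the client
        conn.close()

    # Server side: bind a high-numbered (non-privileged) port on localhost.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))   # 0 = let the OS choose a free port
    port = server.getsockname()[1]
    server.listen(1)
    threading.Thread(target=serve_once, args=(server,)).start()

    # Client side: connect to the server's IP number and port.
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", port))
    client.sendall(b"hello")
    reply = client.recv(1024)
    client.close()
    print(reply)   # b'hello'
    ```

    Unicon's `open(host:port, "n")` and `open(":port", "na")` play the same client and server roles at a higher level.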

    Client-server and Peer to Peer

    A peer-to-peer system is just a system in which clients are servers. This only works if clients' IP numbers are visible to each other. Peer-to-peer systems generally have to use a central server to find each other, and possibly to forward information back and forth between two clients neither of whom can receive incoming connections due to firewalls.

    Configuring CVE's for Object-Focused Interaction (text Chapter 7)

    OK to skim this chapter.

    Startup Time

    A lot of the load time for please8 is apparently decompressing and RESCALING the GIF textures. We can speed things up dramatically by trimming our textures smaller, choosing power-of-2 dimensions (e.g. 512x512 or 512x256 rather than 640x480), and possibly including the textures in the executable or in an uncompressed (or less computationally intensive compressed) format.

    Tiling textures

    To shrink your textures you must tile them. To tile them, use texture coordinates > 1.0. For example, (0,0, 20,0, 20,20, 0,20) repeats a texture 400 times over a rectangular surface. Our walls and related classes need a parameter to let us tweak this number; furthermore, each texture has an ideal size (say, 1x1 meter, or 0.1x0.1 meter, or...), and the tiling factor should be automatically calculated as region size divided by the texture's ideal size. A 6.5x3.0 meter wall would want tiling factors of 65 and 30 for a 0.1x0.1 meter texture. Note that the x and y factors are different. The presence of ideal x- and y-size factors in addition to the actual image suggests a texture database and a texture class will be useful to us.
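    The tiling-factor calculation described above is just one division per axis; a Python sketch (the function name is hypothetical):

    ```python
    def tiling_factors(region_w, region_h, ideal_w, ideal_h):
        """Tiling factors = region size divided by the texture's ideal size,
        computed independently for x and y."""
        return region_w / ideal_w, region_h / ideal_h

    # A 6.5 x 3.0 meter wall with a 0.1 x 0.1 meter texture tiles 65 x 30 times.
    fx, fy = tiling_factors(6.5, 3.0, 0.1, 0.1)
    print(round(fx), round(fy))   # 65 30
    ```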

    Making Textures Tile Properly

    Viewing Volume

    In my multiple rooms prototype, I noticed in longer corridors that I was not rendering the whole scene; the faraway stuff was clipped. It turns out the Unicon runtime code needs to be extended to give us control over the "viewing volume", and extending it to show faraway stuff is trickier than it sounds.

    Project Ideas

    How Not to be Objective (text Chapter 8)

    It is OK to skim this chapter.

    Subjective virtual environments

    Shaping landmarks to make certain encounters more frequent; users with different access levels to data; users with custom views/displays; task-specific displays (e.g. an electrician's view of a building); multi-lingual CVE.

    Flexible Roles in a Shared Space (text Chapter 9)

    Please read this chapter

    "Kansas" - a 2D programming environment for the language "Self". Self is a delegation-based descendant of Smalltalk.

    "Field of view in most CVE's is so narrow that other avatars are usually off-screen".

    Capabilities - an old concept from the 60's. A capability is an object representing the right to access a protected object. Capabilities can be passed/delegated to others. Capabilities store what kind of access is granted, plus a reference to the protected object; they follow a transparent forwarder, or facade paradigm.

    Capabilities could be used anywhere in a CVE, but perhaps the user interface is a sufficient place for them. A capability can itself have a visible manifestation (like the piece of chalk, or the microphone). A "capability tray" might hold (and allow easy sharing of) a user's entire capability set.

    Avatars

    We have already talked some about avatars this semester. Here are some additional thoughts on them.

    Social Interactions in Multiscale CVEs

    (you should read this article, from the ACM CVE 2002 conference). Furnas is a major pioneer in the Computer Human Interaction community. This lecture reflects my thoughts while reading their article.

    social presence
    how you look shapes/influences how others will treat you (social conventions)
    Yo's avatar
    Yo Sep has an avatar that, with a single stroke (head images), achieves more reality than the CVE in this 2002 paper.
    ants and giants
    it is easy for us to exaggerate differences in size, if we have a reason to do so. If you want to see microdetails, become an ant. If you want to walk from here to Albuquerque and see the Organ and Jornada ranges in a single hike, become a giant.
    multiscale collaboration
    for large complex structures, different collaborators may need to work/view the world at different scales (seeing different levels of detail). Example: software architects vs. designers vs. coders
    dynamic sizes
    a user might shrink or grow themselves to fit a situation
    size in social domination
    bigger individuals tend to dominate social situations
    size in natural life
    We all start out life as "newbies", and gain in size and ability. Ability in a CVE might include: movement speed, viewing distance and detail, having keys to certain rooms...
    "size" in unnatural CVE life
    More avatar upgrades might include ability to teleport, walk through walls, fly, etc. Besides size, avatars with more abilities might stand out via color, glow, louder (Jurassic Park-style) strides...
    communication-centered vs. artifact-centered collaboration
    CVE balances and supports both, but it is artifact-centered collaboration where it will shine or fail.
    but avatars need (in general) to be visible
    bad things happen in CVE's with invisible users
    external min and max?
    others' view of you may need to be capped, even if your own view of the world scales wildly
    scaling is more than just calling Scale()
    in general, smaller = render fewer details please. wire frames?
    proxemics
    study of proximity. Proximity is asymmetrical when avatar scales vary: the giant will "feel" that the ant is far away, while the ant will "feel" that the giant is real close.
    proximity ranges:
    • intimate = < 0.45m
    • personal = 0.45-1.2m
    • social = 1.2-3.6m
    • public = > 3.6m
    Two (or more?) avatars?
    One for action, one for conversation? A daemon avatar is a placeholder for conversation (intercom) while one is elsewhere.
    Changing viewpoints
    See from your daemon's eyes; see from others' eyes to know what they are looking at/talking about/referring to.
    Scale-based semantic representations
    Besides "adding detail" as a user shrinks and gets closer to what they want to look at, the representation may change (solid→molecule→atom→particle). The hard part would be to see relationships between objects at different scales/ representations.
    impostors
    at a distance, a much cruder representation of an avatar works fine
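    The proxemic ranges listed above translate directly into a simple classifier; a Python sketch (the treatment of exact boundary values is an assumption on my part):

    ```python
    def proxemic_zone(distance_m):
        """Classify avatar separation using the proxemic ranges in meters:
        intimate < 0.45, personal 0.45-1.2, social 1.2-3.6, public > 3.6."""
        if distance_m < 0.45:
            return "intimate"
        elif distance_m < 1.2:
            return "personal"
        elif distance_m <= 3.6:
            return "social"
        else:
            return "public"

    print(proxemic_zone(0.3), proxemic_zone(2.0), proxemic_zone(5.0))
    # intimate social public
    ```

    A CVE might use such a classifier to decide, say, when to enable voice chat or trigger a greeting gesture.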

    Designing Interactive Collaborative Environments

    Discussion from experts including the "DIVE" folks, one of the most prominent CVE research groups, from Sweden.

    WWW3D and Web Planetarium

    WWW3D and Web Planetarium are examples of "abstract" CVE's in which 3D-ness is not based on ordinary "world" geometry. Primary goal: aid navigation by viewing multiple sites of interest. Each webpage = 1 sphere. Manual and automatic 3D layouts of these spheres.

    Big problem with scalability (too many pages to view). Cluster pages together into 1 sphere per site.

    WWW3D shows very little page contents, mainly shows links; color codes pages by how recently they were viewed. Web planetarium uses the first image in the page as a texture (often a logo or person).

    From public demo: users avoided following links to "warp" new sites into the 3D layout. They preferred to wander around a landscape that is already created.

    The Blob

    "When something is truly engaging, it can take on a life of its own and be appropriated for applications beyond its original intended application."

    Robot-Human collaboration

    Is mixed/augmented reality an application of CVE's?

    Tele-Immersive Collaboration in the CAVE Research Network

    Please Read the above paper.

    Lessons:

    Future Work for the CAVE Folks

    What I learned from this year's Halloween

    This year I went to a Halloween house of extraordinary magnitude, which had a Shrek-style magic mirror mounted next to the front door. The human operating from behind the mirror would chat with kids who were trick-or-treating. In their version, the human pressed a push button in sync with his talking in order to animate the magic mirror's mouth (which was just a black diamond that opened and closed). With a little practice, it looked fairly convincing. My thought: avatars' mouths can be automatically driven by audio amplitudes and/or chat command text.

    Managing Dynamic Shared State (SZ Chapter 5)

    Goal: users see the same thing at the same time.
    Key consideration: keep a consistent view of dynamic shared state.
    Minimum: user position and direction
    Maximum: entire world dynamic and may need to be updated

    consistency vs. throughput

    "by the time Joe's location arrives at Mary's machine, it is already obsolete"

    "it is impossible to allow dynamic shared state to change frequently and guarantee that all hosts simultaneously access identical versions of that state"

    shared repositories

    If the server owns the dynamic state, and clients merely request changes to it, then all clients can be kept consistent...at a high price. The simplest version maintains state in regular files, and omits a server entirely, using NFS or a similar filesystem to make the state available to clients. Slow. Limited users. Version 2 might be a SQL database, which would probably scale better than simple files and NFS. Each operation would not involve opening files, which is expensive. Version 3 would be: use a server, and leave entire dynamic state in its main memory. (Q: is our current CVE using this model?) Issues: server crash may lose state unless it gets written on each update. TCP hogs resources, loses connections, limits maximum # of users.

    Variation: distributed shared repository, in which different dynamic state is managed on different machines. "Virtual" centralized repository.

    Idea: the consistency of your information about others can be proportional to their importance to you or their proximity to you; this doesn't have to be a boolean visible/too-far-away condition. What about updating with frequency proportional to distance? The server could compute: should I send user X's move to user Y? with probability = (1.0 - distance) * (1.0 - direction)
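    That probability formula is speculative, but it fits in a few lines of Python; in this sketch I assume both distance and direction have been pre-normalized to [0, 1] (0 = nearby / facing directly, 1 = at the awareness limit / facing away):

    ```python
    import random

    def send_probability(distance, direction):
        """p = (1 - distance) * (1 - direction), clamped to [0, 1].
        Both inputs are assumed normalized to [0, 1]."""
        p = (1.0 - distance) * (1.0 - direction)
        return max(0.0, min(1.0, p))

    def should_send(distance, direction, rng=random.random):
        """Decide whether to forward user X's move to user Y this update."""
        return rng() < send_probability(distance, direction)

    # A nearby user you are facing almost always receives the update.
    print(send_probability(0.1, 0.0))   # 0.9
    ```

    Nearby, relevant users then get nearly every update, while distant ones get a thinned-out stream rather than being cut off entirely.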

    frequent broadcast

    "blind broadcast": possibly even if the state hasn't changed? allows faster (lower latency), unreliable protocol, since lost packets will be replaced. More updates per second than shared database systems. system is potentially serverless. bad part: sucks bandwidth, limits # of such dynamic objects.

    Specific machine "owns" the object for which it broadcasts updates; more complicated for others to modify the state of that object. Works best in a LAN setting (many early LAN games used this model).

    Concept: "lock lease" = locks that automatically timeout.

    Latencies of 250ms are not uncommon on WANs.

    Jitter = variation in latency from one packet to the next.

    state prediction (dead reckoning)

    idea: transmit updates less frequently, use current information to predict/ approximate future states. transmit not just (x,y) but (x,y,vx,vy) with velocities to use until the next packet. sacrifice accuracy to support more participants and/or run on lower bandwidth connections; decouple frame rate from network packet/update rate. Requires surplus CPU be available.

    prediction = how we calculate current state based on previous packets. commonly using derivative polynomials (velocity, acceleration, and possibly "jerk"). order 0 = state regeneration technique. order 1 adds velocity. order 2 (with acceleration) is "the most popular in use today". Note: if acceleration is changing each packet, using it generates a lot of errors. Good to disable acceleration dynamically when it is not helping, maybe use it when it is nonzero and consistent for 3+ updates in a row.

    derivative polynomials don't take into account our knowledge about the semantics of the object. Separate dead reckoning for each class of virtual object?

    convergence = how we correct error. Instead of "jumping" to correct, we might smoothly adjust. Goal: "correct quickly without noticeable visual distortion". "Snap convergence" just lives with the distortion.

    linear convergence: given the corrected coordinates, predict where the object will be in 1 second. Now, compute the prediction values for the object so that it moves from its current, erring position to where it is supposed to be a second from now. (what if this runs through a wall?)

    To do better: use a curve-fitting algorithm, maybe a cubic spline.
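    The linear convergence recipe (predict where the object should be one second out, then steer the erring position to that point) can be sketched in Python:

    ```python
    def linear_convergence_velocity(current, corrected, corrected_v, horizon=1.0):
        """Given the rendered (erring) position and the freshly received
        corrected state, pick a rendering velocity that moves the object
        from where it is to where it should be `horizon` seconds from now."""
        target = corrected + corrected_v * horizon   # where it should be in 1 s
        return (target - current) / horizon          # velocity to render with

    # Rendered at x = 5, but the packet says x = 6 moving at 2 m/s:
    # the target in 1 s is x = 8, so render with velocity 3 m/s to converge.
    print(linear_convergence_velocity(5.0, 6.0, 2.0))   # 3.0
    ```

    As the text notes, this straight-line path can cut through walls; curve fitting (e.g. a cubic spline) gives a smoother, more plausible correction.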

    Reflections on a MOO scenario

    Reading Assignments

    All About Display Lists

    2D windows can remember their contents in case they need to be redrawn by keeping an in-memory copy of the entire window, a so-called "backing store".

    In the case of 3D windows, we might use a similar strategy but instead keep a display list, which is a data structure that contains all the data about all graphics operations that have been performed since the last time the 3D window was opened or erased.

    OpenGL has a display list concept, but its display lists would not be easily manipulated from the Unicon application level, so we maintain our own display list as a regular Icon/Unicon list. Each element of the list is a list or record produced as a by-product of a 3D output primitive (either a 3D function call, or an attribute that was set) written on that window. Unfortunately, the elements of the display lists are somewhat underdocumented at present, so we will describe them in detail here.

    Display Lists blindfolded

    By brute force you can see what's in a display list as follows:
    L := WindowContents(w)
    every i := 1 to *L do {
       writes(i, ": ", image(L[i]), " -> ")
       every j := 1 to *(L[i]) do writes(image(L[i, j]), " ")
       write()
       }
    

    Display Lists with the headlights on

    Each element of the display list is either a list or a record. The first item in the list or record is the name of the 3D primitive, and the remaining items are the parameters that were passed in when that call was made. Whenever Refresh(w) is called, or whenever the window system requests a repaint, the Unicon VM walks through the display list and repeats each output operation from the list. The following table summarizes the display list elements. Type is either a list with the contents as described, or it is the built-in record type indicated.
    3D Function   Type                                                   Notes
    DrawTorus     gl_torus(name, x, y, z, radius1, radius2)
    DrawCube      gl_cube(name, x, y, z, length)
    DrawSphere    gl_sphere(name, x, y, z, radius)
    DrawCylinder  gl_cylinder(name, x, y, z, height, radius1, radius2)
    DrawDisk      gl_disk(name, x, y, z, radius1, radius2, angle1, angle2)
    Rotate        gl_rotate(name, x, y, z, angle)
    Translate     gl_translate(name, x, y, z)
    Scale         gl_scale(name, x, y, z)
    PushMatrix    gl_pushmatrix(name)
    PopMatrix     gl_popmatrix(name)
    Identity      gl_identity(name)
    MatrixMode    gl_matrixmode(name, mode)
    Texture       gl_texture(name, texture_handle:integer)              texture_handle is an internal code used by OpenGL
    Fg            ["Fg", ["material", r, g, b], ... ]
    Attribute settings get put on the display list as well.
    Attribute     Type                    Notes
    linewidth     ["linewidth", width]
    dim           ["dim", i]
    texmode       ["texmode", i]
    Texcoord      ["Texcoord", val]

    Flaws in the current Display List abstraction

    Yes, there are some flaws. Does the display list understand graphics contexts? It feels like a single-context model to me.

    Extending Texture(w, x) to Texture(w, x1, x2)

    Problem: you can create new textures, but how do you free/reclaim old ones?

    We will run out of texture memory sooner or later, but it needs to be later.

    tex := Texture(w, s)
    ...
    Texture(w, s, tex)
    
    This will modify an existing texture on the display list, instead of creating a new one. It will also set the current texture to tex.

    How soon? Well, I put the prototype code into ~jeffery/unicon/unicon last night, but it isn't tested or checked out on Windows yet. I'll try for ASAP.

    Things to Check out

    Working in AlphaWorld (Churchill chapter 15)

    AlphaWorld is a $6.95/mo, primarily social CVE. But here (Chapter 15) is a strong endorsement of it.

    Lessons from Blaxxun

    Lessons from AW

    Things Our CVE Needs "Real Bad"

    In Class n-User CVE Demo

    Q: When is HTTP the right protocol for file transfers?
    A: If/when it saves us any work to get our CVE prototyped.

    Q: Why and for what, do we need file transfers?
    A: User-supplied textures. New data and code files. Patching the executable.

    Features We Need to Integrate

    Features We Need to do Starting From Scratch

    Features we Wish we Could Do, But not Anytime Soon