How to detect the face of the cube where user has touched

Tutorials concerning the OpenGL® ES cross-platform API for full-function 2D and 3D graphics on the Google-Android platform.

How to detect the face of the cube where user has touched

Postby srispis » Fri Jan 22, 2010 5:48 am

Hi -
I am building an application using cube with textures. I was able to successfully apply textures to all faces of my cube with different images; thanks to someone who posted TCube earlier in this site. Now I want to detect the face where the user has touched and based on the face, I would like to take user to different activities. I googled and did quite a bit of digging, however other than information like use color selections" (or) polygon detections, I couldn't find an example which I could use directly.
I appreciate your help on this.

Attached are the files I was playing around with.
-Sri
Attachments
TCube.java
(8.56 KiB) Downloaded 400 times
TouchRotateActivity.java
(7.27 KiB) Downloaded 312 times
srispis
Freshman
 
Posts: 2
Joined: Thu Jan 21, 2010 3:24 pm


Postby MichaelEGR » Fri Jan 22, 2010 8:51 am

This is a fairly advanced topic and there are different methods; I haven't yet spent the time to find the best solution for Android / GL ES 1.x. It sounds like you already came across a few, which I'll summarize. If you are asking for ready-made Java/Android code that does this, you may very well be out of luck.

First, there is ray casting. It only partially answers your question, but it provides some mileage. You can cast a ray from the eye/camera position through the touch point on the screen and intersect it with a plane set up through the cube. That gives you a point on the plane, which you can then test against an AABB (axis-aligned bounding box) to determine whether the cube is selected. Detecting which specific side was hit is a little more complex.
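For the AABB part of that, here is a minimal sketch (not from the attached files) of the standard slab test; it assumes you have already built a ray origin and direction in world space, for example by unprojecting the touch point:

Code: Select all
/**
 * Minimal sketch: slab test for a ray against an axis-aligned bounding box.
 * The ray origin/direction are assumed to have been derived from the touch
 * point already (for example with GLU.gluUnProject).
 */
public final class RayAabb {

    /** Returns true if the ray (origin o, direction d) hits the box [min, max]. */
    public static boolean hits(float[] o, float[] d, float[] min, float[] max) {
        float tNear = Float.NEGATIVE_INFINITY;
        float tFar  = Float.POSITIVE_INFINITY;
        for (int i = 0; i < 3; i++) {
            if (Math.abs(d[i]) < 1e-8f) {
                // Ray parallel to this slab: it must already lie inside it.
                if (o[i] < min[i] || o[i] > max[i]) return false;
            } else {
                float t1 = (min[i] - o[i]) / d[i];
                float t2 = (max[i] - o[i]) / d[i];
                if (t1 > t2) { float tmp = t1; t1 = t2; t2 = tmp; }
                tNear = Math.max(tNear, t1);
                tFar  = Math.min(tFar, t2);
                // Slabs no longer overlap, or the box is entirely behind the ray.
                if (tNear > tFar || tFar < 0f) return false;
            }
        }
        return true;
    }
}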

---

Color picking (could work well if each face of the cube is already a different color, or if you render a second, pick-only pass; the pick pass requires no lighting or other effects):
http://www.lighthouse3d.com/opengl/pick ... hp3?color1
This might work best in non-complex scenes like yours.

--

Depth picking (I haven't tried this):
http://blogs.agi.com/insight3d/index.ph ... th-buffer/

---

Somewhat related and perhaps your next question:
model rotation based on user perspective (see the bottom of this article)
http://www.sunsetlakesoftware.com/2008/ ... -opengl-es
Note that glGetFloatv is not implemented on the G1 / 1.5/1.6 OSes, though I haven't checked the Droid / 2.x. You have to use something similar to the matrix tracking code in the triangles GL demo provided by Google.
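As a rough idea of what that matrix tracking looks like (a minimal sketch, not the Google demo code itself): mirror every modelview call on the CPU with android.opengl.Matrix so the matrix is available even where glGetFloatv is missing.

Code: Select all
import android.opengl.Matrix;

/**
 * Minimal sketch of CPU-side matrix tracking (not the demo code itself).
 * Call these methods alongside the matching gl* calls in onDrawFrame() so a
 * copy of the modelview matrix is always available on the Java side.
 */
public class TrackedModelView {
    private final float[] m = new float[16];

    public TrackedModelView() { Matrix.setIdentityM(m, 0); }

    public void loadIdentity()                             { Matrix.setIdentityM(m, 0); }
    public void translate(float x, float y, float z)       { Matrix.translateM(m, 0, x, y, z); }
    public void rotate(float a, float x, float y, float z) { Matrix.rotateM(m, 0, a, x, y, z); }
    public void scale(float x, float y, float z)           { Matrix.scaleM(m, 0, x, y, z); }

    /** The tracked matrix, usable with GLU.gluProject / gluUnProject. */
    public float[] get() { return m; }
}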

----

I'm sure there are other methods suitable for OpenGL ES, but I have yet to explore them. I'd recommend color picking at this point, especially since your scene is so simple. When you need to detect a pick, you can clear and render the color-coded version, read back the pixel to do the check, clear the buffer again, and then render the actual full scene.
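Roughly like this (a minimal sketch only; drawFaceFlatColor() is a hypothetical helper that draws one face using the current color and is not part of the attached TCube.java; run it on the GL thread):

Code: Select all
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import javax.microedition.khronos.opengles.GL10;

/**
 * Minimal color-picking sketch. Each face is drawn in a unique flat color,
 * the pixel under the touch point is read back, and the color is decoded
 * back into a face index. Run this on the GL thread before drawing the
 * real frame.
 */
public class ColorPicker {

    /** Returns the face index (0-5) under the touch point, or -1 for a miss. */
    public int pickFace(GL10 gl, float touchX, float touchY, int viewHeight) {
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        gl.glDisable(GL10.GL_TEXTURE_2D);   // flat colors only: no textures,
        gl.glDisable(GL10.GL_LIGHTING);     // no lighting, no blending, etc.

        for (int face = 0; face < 6; face++) {
            // Encode the face index in the red channel, spaced widely enough
            // to survive a 16-bit (RGB565) framebuffer.
            gl.glColor4f((face + 1) * 40 / 255f, 0f, 0f, 1f);
            drawFaceFlatColor(gl, face);    // hypothetical per-face draw call
        }

        // Read the single pixel under the touch point (GL's origin is bottom-left).
        ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
        int glY = viewHeight - (int) touchY - 1;
        gl.glReadPixels((int) touchX, glY, 1, 1, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixel);

        int red = pixel.get(0) & 0xFF;
        int face = Math.round(red / 40f) - 1;
        return (face >= 0 && face < 6) ? face : -1;
        // Afterwards: re-enable texturing/lighting and render the real scene.
    }

    private void drawFaceFlatColor(GL10 gl, int face) {
        // Hypothetical: draw the given face of the cube using the current color.
    }
}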
Founder & Principal Architect; EGR Software LLC
http://www.typhonrt.org
http://www.egrsoftware.com
MichaelEGR
Senior Developer
 
Posts: 147
Joined: Thu Jan 21, 2010 5:30 am
Location: San Francisco, CA

Postby zorro » Fri Jan 22, 2010 9:29 am

Actually, you can do it more simply than that, using gluProject from the GLU toolkit.
This is taken from the online Android SDK spec:

Code: Select all
public static int gluProject (float objX, float objY, float objZ, float[] model, int modelOffset, float[] project, int projectOffset, int[] view, int viewOffset, float[] win, int winOffset)
Since: API Level 1

Map object coordinates into window coordinates. gluProject transforms the specified object coordinates into window coordinates using model, proj, and view. The result is stored in win.

Note that you can use the OES_matrix_get extension, if present, to get the current modelView and projection matrices.
Parameters
objX    object coordinates X
objY    object coordinates Y
objZ    object coordinates Z
model    the current modelview matrix
modelOffset    the offset into the model array where the modelview matrix data starts
project    the current projection matrix
projectOffset    the offset into the project array where the projection matrix data starts
view    the current view, {x, y, width, height}
viewOffset    the offset into the view array where the view vector data starts
win    the output vector {winX, winY, winZ}, which returns the computed window coordinates
winOffset    the offset into the win array where the win vector data starts

Returns
A return value of GL_TRUE indicates success, a return value of GL_FALSE indicates failure.


Basically, you give it a 3D point and it returns the projected 2D window/screen position. So you can compute the 2D screen points of all the visible faces' corners (in the cube's case, between 1 and 3 faces are visible), and then the problem is reduced to finding whether a point (the point where the user touches) lies inside a convex polygon (you can find several algorithms for that on the web). I don't know if it's the best solution, but it works. The catch is that you must feed the function the modelview and projection matrices. The SDK suggests these can be obtained with the OES_matrix_get extension, but I don't know whether that extension is implemented on all Android devices. If it isn't, the color picking method may be safer.
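To make that concrete, here is a minimal sketch (not from the attached files) of the per-face test. The modelview and projection matrices and the viewport {x, y, width, height} must be supplied by the caller, and it should only be called for faces that actually face the camera, otherwise back faces will report hits as well:

Code: Select all
import android.opengl.GLU;

/**
 * Minimal sketch of the gluProject approach: project a face's four corners
 * into window coordinates, then test whether the touch point lies inside
 * the resulting convex quad.
 */
public final class ProjectPick {

    /** corners holds 4 corners of one face as x,y,z triples in object space. */
    public static boolean touchHitsFace(float[] corners, float touchX, float touchY,
                                        float[] modelview, float[] projection,
                                        int[] viewport) {
        float[] win = new float[3];
        float[] sx = new float[4];
        float[] sy = new float[4];
        for (int i = 0; i < 4; i++) {
            GLU.gluProject(corners[3 * i], corners[3 * i + 1], corners[3 * i + 2],
                    modelview, 0, projection, 0, viewport, 0, win, 0);
            sx[i] = win[0];
            // gluProject's window Y grows upward; touch Y grows downward.
            sy[i] = viewport[3] - win[1];
        }
        return pointInConvexQuad(touchX, touchY, sx, sy);
    }

    /** The point must be on the same side of every edge of the convex quad. */
    private static boolean pointInConvexQuad(float px, float py, float[] sx, float[] sy) {
        boolean positive = false, negative = false;
        for (int i = 0; i < 4; i++) {
            int j = (i + 1) % 4;
            float cross = (sx[j] - sx[i]) * (py - sy[i]) - (sy[j] - sy[i]) * (px - sx[i]);
            if (cross > 0) positive = true;
            if (cross < 0) negative = true;
            if (positive && negative) return false;
        }
        return true;
    }
}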
zorro
Experienced Developer
 
Posts: 71
Joined: Mon Aug 10, 2009 3:11 pm
Location: Romania

