May 28, 2013

Graphics guy saying hi

Greetings all! My name's Lauri (cand), and I'll be working on bringing some graphical improvements to STK this summer.



Here's the initial timeline:

Weeks 1-2: Groundwork (such as a wrapper class to ease loading shaders - see the rough sketch after this timeline), glow (including bloom, since it's very similar), and smoothed minimap

Weeks 3-4: Lighting, including rim lighting for the karts

Initial plan is to implement a light prepass system.

Weeks 5-6: MLAA, SSAO

Weeks 7-8: God rays, motion blur

Weeks 9-10: Water

This will include screen-space reflections, simulated caustics, and vertex animation via scrolling textures.

Weeks 11-12: Grass, and wrapping up
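
To give a better idea of the groundwork weeks, here's a rough sketch of what I mean by a wrapper class to ease loading shaders. It's a minimal example assuming plain OpenGL via GLEW; the class name, file handling and error reporting are placeholders, not the interface that will end up in STK:

```cpp
#include <GL/glew.h>
#include <fstream>
#include <sstream>
#include <stdexcept>
#include <string>

// Compiles a vertex/fragment shader pair and links them into a program,
// so the rest of the engine only constructs one object and calls use().
class ShaderProgram
{
public:
    ShaderProgram(const std::string &vertex_file, const std::string &fragment_file)
    {
        GLuint vs = compile(GL_VERTEX_SHADER,   readFile(vertex_file));
        GLuint fs = compile(GL_FRAGMENT_SHADER, readFile(fragment_file));

        m_program = glCreateProgram();
        glAttachShader(m_program, vs);
        glAttachShader(m_program, fs);
        glLinkProgram(m_program);

        GLint linked = GL_FALSE;
        glGetProgramiv(m_program, GL_LINK_STATUS, &linked);
        if (!linked)
            throw std::runtime_error("Failed to link " + vertex_file +
                                     " / " + fragment_file);

        // The shader objects are no longer needed once the program is linked.
        glDeleteShader(vs);
        glDeleteShader(fs);
    }

    void use() const { glUseProgram(m_program); }

private:
    GLuint m_program;

    static std::string readFile(const std::string &path)
    {
        std::ifstream in(path.c_str());
        std::ostringstream ss;
        ss << in.rdbuf();
        return ss.str();
    }

    static GLuint compile(GLenum type, const std::string &source)
    {
        GLuint shader = glCreateShader(type);
        const char *src = source.c_str();
        glShaderSource(shader, 1, &src, NULL);
        glCompileShader(shader);

        GLint compiled = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
        if (!compiled)
            throw std::runtime_error("Shader compilation failed");
        return shader;
    }
};
```

The point is simply that glow, bloom and the later lighting work can then grab a compiled program in one line instead of repeating the compile/link boilerplate everywhere.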


If there's some part of STK that's been poking you in the eye, or some effect you absolutely must have, leave a comment - the plan's not set in stone.

After each period, I'll be posting some comparison shots, before and after, of that period's progress.

Signing out.

Google Summer of Code - Students' View

After more than two weeks of deliberation and discussion among all mentors and admins, we have finally reached a conclusion for GSoC. First of all, congratulations to the three selected students:
  • Hilnius: Network Core for SuperTuxKart - implementing network multiplayer, but not the lobby or rewind
  • Cand: Graphical improvements
  • Uni: Networking lobby
It was a very difficult decision for us to make. For each idea proposed to us we usually had at least two students we would have loved to pick, but we had only three slots (which is quite high for a first-time GSoC organisation), so we had to choose. Congratulations again to Hilnius, Cand, and Uni.

Unfortunately, this means that we disappointed 75 other students. We would gladly have taken 6 or so more (but it would not have been realistic to try to mentor that many students). We hope that next year we might have more mentors and more slots, and will be able to accommodate more proposals. In case you are interested in some statistics, here are the numbers of proposals we got for each of our suggestions, and the average ranking in each category (1 to 5). There were 6 proposals suggesting new things (including one that proposed making a 3D kart racing game where you have powerups ... hello, did you even play SuperTuxKart??).


We won't have time to give individual feedback to all students, but here are some common issues we noticed in the various sections. First of all, it certainly helps if you spell the name SuperTuxKart correctly - we saw quite a few variations ;) Beyond that, some comments about frequent problems we noticed:

Patches

While most students had no problems with the patches, we noticed that many of the patches were not tested. A simple example is our suggested patch of replacing printf with Log::warn/Log::error etc. calls.
Assuming a patch is correct just because it compiles is not sufficient. In one case a patch would compile on Linux, but not on Windows; the reason was missing parameters in function calls - something that is easy to notice if you verify that your patch actually works. People also did not notice that e.g. Log::warn already prints "[warn]", so any 'Warning:' included in the message text is redundant and should have been removed. For the record, we did not really exclude anyone because of those problems, since we stated that we mainly wanted to see that using SVN etc. worked. We did, however, rank someone who attached a patch as an image (png) ... quite low, since this showed us that the basics of a version control system were not fully understood, and that is an essential skill any student needs to have.
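Going back to the printf example, this is roughly the shape such a patch should take (the component name and message are made up for illustration, and the exact Log::warn signature in our code base may differ slightly):

```cpp
// Before: plain printf with a hand-written prefix.
printf("Warning: Could not load texture '%s'.\n", filename.c_str());

// After: Log::warn already prints a "[warn]" prefix, so the redundant
// "Warning:" text is dropped. (Whether a trailing newline is still needed
// depends on the Log implementation - another thing a quick test reveals.)
Log::warn("TextureManager", "Could not load texture '%s'.", filename.c_str());
```

Actually running the game and triggering that code path once would have caught most of the problems mentioned above.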
Some students surprised us by taking on some rather complex bugs that were in our tracker, so we already have quite a few improvements for the next release because of this.

Battle Mode

The most popular proposal. Many proposals suffered from missing details, e.g. not explaining what information is exported from Blender into SuperTuxKart and how it is used - they just assumed that a graph would be available. Others left out how a target kart or target item would be selected, or how the AI was meant to drive (i.e. whether the existing AI code could be used). Some proposals were more research-oriented - and while I'd love to do more AI research work, for GSoC we had to accept proposals that, in our opinion, had better chances of success - the last thing we want is a good student stuck with an idea that just doesn't work properly.

Race Verifier

This somewhat simple-sounding idea proved to be quite difficult. Many people suggested simply including the stk_config.xml file (careful study of the code would show that all kart.xml files would need to be included as well). More advanced proposals suggested encrypting the file in order to prevent tampering with the data. While this sounds convincing, it misses the point that if you compile your own sources, it is trivial to write whatever you want into that file and use completely different values in-game. Only a few proposals suggested actually using STK itself to verify whether a race is correct, and only one proposal suggested statistically analysing which items were collected in a race (a suggestion that can easily be improved by storing the random number seed for each box - from that seed the sequence of collected items can be reproduced; see the sketch at the end of this section).
Overall, this project idea had the fewest good proposals in the end. I guess that because it could potentially be a stand-alone program, many students either underestimated its complexity or just missed the point.
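To make the seed idea concrete, here is a minimal sketch - the item names, the record layout and the toy random generator are purely illustrative, not STK code:

```cpp
#include <cstdint>
#include <vector>

// Illustrative item types - the real game has its own ItemType enum.
enum ItemType { BANANA, BUBBLEGUM, CAKE, ZIPPER, NUM_ITEM_TYPES };

// One record per item box hit, stored in the replay/verification file.
struct ItemBoxRecord
{
    uint32_t m_seed;   // random seed used when this box was collected
};

// Deterministic: the same seed always yields the same item, so the sequence
// of collected items can be reproduced from the stored seeds alone.
ItemType itemFromSeed(uint32_t seed)
{
    uint32_t r = seed * 1664525u + 1013904223u;   // toy LCG step
    return static_cast<ItemType>(r % NUM_ITEM_TYPES);
}

// A verifier replays the recorded seeds and checks that the items claimed
// in the submitted race actually match what those seeds produce.
bool verifyItems(const std::vector<ItemBoxRecord> &records,
                 const std::vector<ItemType>      &claimed)
{
    if (records.size() != claimed.size()) return false;
    for (size_t i = 0; i < records.size(); i++)
        if (itemFromSeed(records[i].m_seed) != claimed[i]) return false;
    return true;
}
```

The verifier only needs the stored seeds and the claimed items; everything else follows deterministically.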

Rewind

Many proposals ignored that you need to store the user events, e.g. steering and firing. When you rewind and replay from a previous state, you still need to fire, accelerate and steer at the right times, otherwise the whole point of rewinding is missed (see the sketch below). Better proposals suggested making further use of the replay data, e.g. to show a slow-motion shot of finishing the race.
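To make that concrete, here is a minimal sketch of the kind of input log such a proposal needs; the field names and the time representation are my own assumptions, not actual STK data structures:

```cpp
#include <vector>

// One recorded user input, time-stamped with the simulation time.
struct PlayerEvent
{
    float m_world_time;    // when the input happened
    float m_steering;      // -1 (left) .. 1 (right)
    float m_acceleration;  //  0 .. 1
    bool  m_fire;          // was an item fired?
};

class EventLog
{
public:
    void record(const PlayerEvent &event) { m_events.push_back(event); }

    // After rewinding to an earlier saved state, the stored inputs between
    // start_time and end_time must be re-applied at the right moments -
    // otherwise the replayed simulation diverges from what actually happened.
    template <typename ApplyFn>
    void replayFrom(float start_time, float end_time, ApplyFn apply) const
    {
        for (size_t i = 0; i < m_events.size(); i++)
        {
            const PlayerEvent &e = m_events[i];
            if (e.m_world_time >= start_time && e.m_world_time <= end_time)
                apply(e);
        }
    }

private:
    std::vector<PlayerEvent> m_events;
};
```

The same log is also what makes a slow-motion finish replay essentially free.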

Summary

Some general comments to wrap this up: similar things can be said about almost any proposal. In general, proposals from people who had spoken to us received much higher rankings. This was especially obvious in the category of new proposals - if it hadn't been for Hilnius's proposal, its average would have been much lower.

One frequently raised concern was that many students had no prior experience with open source development. We made a point of reading the actual proposal first, before looking at a student's background, but in general people with more practical experience still received better scores: their proposals were simply more complete, i.e. they thought out every step of the way to the goal. For any student who might be interested in participating in a future GSoC, I would strongly recommend using the time until then to get more experience - and contributing to open source is a very good way to do that. While I can't speak for all open source projects, I know many will welcome new contributions and will be happy to help anyone who wants to learn and is willing to put effort into it.



May 6, 2013

GSoC - Deadline for Student Proposal

May 3rd was the deadline for all proposals from students who wanted to participate in the 2013 Google Summer of Code. We received an amazing 79 proposals, which certainly kept our mentors very busy. A big thank you to all our mentors for the tireless and patient work they did over the last two or three weeks: they were constantly answering questions, giving feedback, and helping students get started. I received over 600 GSoC-related emails during those three weeks (not counting those on the Google list), and according to my log files I talked to more than 30 people in private chats. It got quite frantic towards the end. Here is the list of proposals per day:


In contrast to what we had heard beforehand, the proposals were mostly of rather high quality; we didn't receive many 'spam' proposals (e.g. proposals completely unrelated to SuperTuxKart). Google had reduced the number of proposals a student can submit from 20 to 5, which might be responsible for this: there is less opportunity for students to 'spam' mentoring organisations with bad proposals (and since it appears we had an above-average number of proposals, we were even less likely to be targeted by those, since the chances of a bad proposal getting through with us would be even worse).

Some of the proposals were extremely long and detailed (one had 18 pages in an attached pdf file). Not that we expected that much detail, but it shows how much effort some students put into their proposals. Most of them also reacted positively to the feedback we gave them, so a compliment to the students at this stage as well.

Here some common problems we noticed so far:
  • Not having the right level of description: stating that you are going to 'write a battle AI using some existing path finding algorithm' isn't really enough to tell us how your code is supposed to work - path finding was only one part of that project, so how does it fit together with all the other missing parts? On the other hand, a list of function names and parameters does not tell us how those functions are supposed to work together, and that is the important part; we don't need to know the function names here.
  • Not understanding the project: many people proposed encryption/signing to make sure that replay files saved by SuperTuxKart cannot be altered, completely forgetting that with an open source game it is trivial to modify the data before it is signed.
  • Untested patches: some of the patches appeared not to have been tested, and did not work as expected. If you replace a printf warning message with our new Log::warn(...) interface, you should at least test that the warning still works, not just that the code compiles: do something to get the warning printed, and make sure the output is what you expect.
We will probably not be able to provide individual feedback on all proposals, but I intend to write a follow-up blog post detailing some of the common problems for certain ideas, and in general some other things we noticed. Hopefully we will later be able to publish some of the better proposals as examples for everybody.

It will be another few days before Google lets us know how many slots (students) we are going to get. Then we will be busy for the next two weeks reading and ranking 79 proposals and picking the final selection of students. But one thing is already obvious: we have many more good proposals than we will get slots, even in the best case.