Thursday, January 24, 2019

Abstract Selection Process for APEX Connect 2019

I noticed that there is almost nothing published on the internet about how an abstract selection process works. Being the program manager for DOAG APEX Connect, I would like to share some insights into how this process works for this conference. Please be aware that everything I say here applies only to DOAG APEX Connect 2019, and the process might change in the future. Also, other DOAG conferences may handle this process a bit differently.

For APEX Connect 2019, we started our journey by opening the Call for Papers. During this time, everybody was able to submit a paper through our website https://apex.doag.org. We encouraged everybody to read through our submission guidelines, as the abstract is the basis for acceptance:

https://www.doag.org/en/home/news/eleven-steps-to-a-successful-submission-of-your-presentation/detail/

We received a stunning 171(!) abstracts during the Call for Papers, but only 53 regular sessions fit in the program schedule. Even though we strive to give each presenter only one session, there is no way we could accept every abstract. This is where the abstract selection process comes in. Each year, we ask a team of about 20 experts to rate all abstracts (from 1 to 5) and to provide a comment on their rating. These experts only see the title and abstract text, without the presenter's name or the company they work for (anonymous voting). This prevents the ratings from being biased by personal preferences, and it ensures that essentially only great abstracts stand a chance of being accepted.

After the rating, the actual selection took place. APEX Connect has three streams: APEX, PL/SQL and JavaScript. Each stream was assigned a lead, who selected abstracts for their stream up to a predefined number. These stream leads have access to all information submitted by the presenter, including the presenter and company name, and either accept, reject or put an abstract on "standby". The standby list is used whenever a scheduled presenter is unable to attend the conference for some reason.

The criteria by which the selection takes place are:

1. Average rating

2. Check to see if the topic is already covered
To provide the best program, we wanted to cover as many relevant topics as possible, so most topics could be covered by only one presentation. One abstract even became obsolete during the selection process, as Oracle Database Cloud Exadata Express was decommissioned by Oracle.

3. Number of sessions this presenter already has
For Connect, we strive to give each presenter only one presentation. This is not always possible, as, for example, some well-known presenters only receive travel approval if they have at least two sessions.

4. Exceptions to the criteria above
There was only one case where somebody wrote a terrible abstract (and it was rated as such), but after discussion with the whole group we still decided to accept it, both because of its topic and because of who the presenter was.
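To make the criteria above concrete, here is a minimal Python sketch of such a greedy selection: highest average rating first, one abstract per topic, and a cap on sessions per presenter. The data shapes and function name are invented for illustration; the real process was done by people using the conference-manager app, not by a script, and the human exceptions (criterion 4) are deliberately not modeled.

```python
from statistics import mean

def select_abstracts(abstracts, slots, max_per_presenter=1):
    """Greedy selection sketch: take abstracts in order of average
    rating, skip topics already covered, and limit how many sessions
    a single presenter can get."""
    # Criterion 1: order by average expert rating, best first.
    ranked = sorted(abstracts, key=lambda a: mean(a["ratings"]), reverse=True)
    accepted = []
    covered_topics = set()
    per_presenter = {}
    for a in ranked:
        if len(accepted) == slots:
            break
        # Criterion 2: most topics should be covered by one talk only.
        if a["topic"] in covered_topics:
            continue
        # Criterion 3: cap the number of sessions per presenter.
        if per_presenter.get(a["presenter"], 0) >= max_per_presenter:
            continue
        accepted.append(a)
        covered_topics.add(a["topic"])
        per_presenter[a["presenter"]] = per_presenter.get(a["presenter"], 0) + 1
    return accepted
```

Note that a pure greedy pass like this can leave slots unfilled or make locally optimal choices a committee would overrule, which is exactly why the stream leads review the list by hand.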

After the selection process, the program manager verified, for example, that no single company got too many sessions, and discussed open issues with the stream leads before scheduling all accepted sessions. The schedule was checked for the following:

1. Are the session rooms appropriate in size for the expected audience?

2. Is there at least one English session at all times?

3. Was a presenter accidentally scheduled twice at the same time?
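Checks 2 and 3 above are mechanical enough to sketch in a few lines of Python. This is only an illustration with made-up data shapes (a timeslot mapping to a list of sessions), not the tooling we actually used; the room-size check (1) needs expected-audience data and stays a human judgment call here.

```python
def schedule_problems(schedule):
    """Return a list of problems found in a schedule, where `schedule`
    maps a timeslot to the sessions running in parallel, each session
    being a dict with a "presenter" and a "language" key."""
    problems = []
    for slot, sessions in schedule.items():
        # Check 2: at least one English session in every timeslot.
        if not any(s["language"] == "EN" for s in sessions):
            problems.append(f"{slot}: no English session")
        # Check 3: no presenter scheduled twice in the same timeslot.
        presenters = [s["presenter"] for s in sessions]
        for p in set(presenters):
            if presenters.count(p) > 1:
                problems.append(f"{slot}: {p} scheduled twice")
    return problems
```

An empty result means the schedule passes both automated checks.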

A final check was done by the project manager, after which the program was discussed in a web conference with all volunteers involved in the rating process. After the conference schedule was finalized, everybody who submitted a paper was notified by e-mail and the schedule went online.

This whole process applies to all 53 regular sessions. In addition, we also have special sessions such as keynotes, 1:1 sessions, beginner sessions, etc. These sessions are organized by the program committee.

Please let us know if you have any questions. We are always looking for ways to streamline the process, so we welcome any recommendations as a comment on this blog post.

A big "thank you" goes out to all volunteers that were involved in the rating process to make this the best schedule ever! Also, I would like to take this opportunity to thank ODTUG and Jorge Rimblas for providing DOAG the abstract selection app based on APEX. You can find this great open source solution here: https://github.com/insum-labs/conference-manager

Last but not least: please visit APEX Connect, as you will get the best conference schedule EVER! Convince yourself: have a look at our schedule at https://apex.doag.org/en/program and order your tickets today at https://apex.doag.org/en/tickets/tickets.

BTW, APEX Connect is not the only APEX conference out there. I can also recommend visiting the following great APEX conferences: NLOUG APEX World, APEX Alpe Adria and ODTUG Kscope.

CU in Bonn.

3 comments:

  1. Great post.
    I was wondering about the case of the terrible abstract.
    Was the person notified that acceptance was 'despite' rather than 'because'?
    Usually, the only 'feedback' on abstracts one gets is whether it's accepted or not. The latter usually by a standard mail saying something about great quality but unfortunately...
    So, the only indication a submitter has that an abstract may be substandard is that it never gets accepted. If it does get accepted 'despite' but without feedback, the opportunity to learn and improve is gone.

    In my opinion the abstract selection committees of such events could be more brutally honest in the acceptance, but especially the rejection emails. The only way to improve is to know you screwed up.

    ReplyDelete
    Replies
    1. Thanks for your comment. Yes, the presenter involved was informed. Generally, I agree with you, but it takes a tremendous amount of time to explain why somebody was rejected. Also, you don't do this by e-mail, as it may be misunderstood. Remember that we all do this work voluntarily. :)

      As said in the post, it is extremely important to read through the submission guidelines before submitting an abstract. That is the advice I can give you.

      Delete
  2. As a person who has been on both ends of the conference abstract submission/selection process - as both submitter AND reviewer - I really like Erik’s suggestion. It would be good to offer some constructive feedback to the submitters when their abstract was not chosen.
    We reviewers often have hundreds of abstracts to wade through, often with far fewer slots than submissions. Obviously we can’t choose everyone. And sometimes it’s a very difficult decision to reject someone’s session - we often receive multiple excellent submissions on the same topic. Which one to pick when they are all so good?

    I will take to heart the suggestion to let the submitters know why they were not accepted.

    ReplyDelete