Stanford Seminar - Evaluating and Designing Computing Systems for the Future of Work


Research Focus

  • The research focuses on workplace technologies, leveraging empirical evidence, technical methods, and design principles.
  • The speaker proposes studying computing technologies as a science of the artificial, inspired by Herbert Simon's work.

Multitasking in Remote Meetings

  • A study analyzed multitasking behaviors during remote meetings using telemetry data and a survey.
  • Multitasking was common in online meetings, with around 30% involving email multitasking and 25% involving file multitasking.
  • Multitasking was more likely to occur in larger meetings, longer meetings, morning meetings, and recurring meetings.
  • Multitasking was associated with lower meeting relevance or engagement.
  • Best practices for running effective remote meetings were suggested, such as avoiding important meetings in the morning, reducing unnecessary meetings, shortening meeting durations, inserting breaks, encouraging active contribution, and allowing space for positive multitasking.
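The telemetry analysis above can be pictured with a small sketch. This is an illustrative toy, not the study's actual pipeline: the record schema (attendee count, duration, per-attendee email and file activity) is a hypothetical stand-in for the real telemetry signals.

```python
# Hypothetical sketch of computing multitasking rates from meeting telemetry.
# The MeetingRecord fields are illustrative assumptions, not the study's schema.
from dataclasses import dataclass

@dataclass
class MeetingRecord:
    attendees: int
    duration_min: int
    emails_sent: int   # emails the attendee sent during the meeting
    files_edited: int  # files the attendee edited during the meeting

def multitasking_rates(records):
    """Return (email_rate, file_rate): fractions of meetings with each behavior."""
    n = len(records)
    email_rate = sum(1 for r in records if r.emails_sent > 0) / n
    file_rate = sum(1 for r in records if r.files_edited > 0) / n
    return email_rate, file_rate

meetings = [
    MeetingRecord(attendees=12, duration_min=60, emails_sent=3, files_edited=0),
    MeetingRecord(attendees=4,  duration_min=30, emails_sent=0, files_edited=1),
    MeetingRecord(attendees=8,  duration_min=45, emails_sent=0, files_edited=0),
]
email_rate, file_rate = multitasking_rates(meetings)
```

The same per-meeting records could then be grouped by size, duration, or time of day to reproduce the comparisons the study reports (e.g., larger vs. smaller meetings).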

Team Viability Classification

  • A study was conducted to classify and predict team viability using team chat interactions.
  • The study aimed to understand the mechanisms behind team viability and identify behaviors associated with high versus low viability teams.
  • It was found that it was possible to classify high versus low viability teams using team chat interactions.
  • Certain behaviors were strongly associated with high versus low viability teams, such as the use of positive language, coordination, and task-related communication.
  • Team viability can be classified with high accuracy using machine learning algorithms.
  • Team viability can be predicted after only 70 seconds of initial interaction.
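The idea of classifying viability from chat behavior can be sketched as follows. The word lists and the threshold rule here are hypothetical stand-ins for the study's actual feature set and trained classifier; they only illustrate the shape of the approach (extract linguistic cues per message, then classify).

```python
# Illustrative sketch: keyword cues for positive language and coordination,
# fed into a toy threshold classifier. All word lists and the threshold are
# made-up assumptions, not the study's trained model.
POSITIVE = {"great", "thanks", "nice", "agree", "good"}
COORDINATION = {"let's", "we", "plan", "assign", "deadline"}

def viability_features(messages):
    """Count positive-language and coordination cues per message."""
    words = [w.lower().strip(".,!?") for m in messages for w in m.split()]
    n = max(len(messages), 1)
    return {
        "positive": sum(w in POSITIVE for w in words) / n,
        "coordination": sum(w in COORDINATION for w in words) / n,
    }

def classify(messages, threshold=0.5):
    """Toy rule: label a team 'high' viability if cue density is high enough."""
    f = viability_features(messages)
    return "high" if f["positive"] + f["coordination"] >= threshold else "low"

chat = ["Great plan, thanks!", "Let's assign tasks and agree on a deadline."]
label = classify(chat)
```

In the study itself these features would be inputs to a learned model rather than a hand-set threshold; the sketch only shows why early messages can already carry a viability signal.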

AI-Generated Scientific Feedback

  • Large language models can provide useful scientific feedback on research papers.
  • A retrospective evaluation showed that AI-generated feedback significantly overlaps with human feedback, especially for weaker or rejected papers.
  • A user study found that more than half of the researchers found LLM feedback helpful or very helpful, and over 80% found it more beneficial than feedback from at least some human reviewers.
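One way to picture the retrospective overlap measurement is a simple comment-matching sketch. The actual evaluation used a more careful matching pipeline; the word-level Jaccard similarity and the 0.3 threshold below are hypothetical choices for illustration only.

```python
# Hypothetical sketch: fraction of AI-generated comments that overlap with at
# least one human reviewer comment, using word-level Jaccard similarity.
# The similarity measure and threshold are illustrative assumptions.
def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def overlap_rate(ai_comments, human_comments, threshold=0.3):
    """Fraction of AI comments matching at least one human comment."""
    hits = sum(
        any(jaccard(a, h) >= threshold for h in human_comments)
        for a in ai_comments
    )
    return hits / len(ai_comments)

ai = ["the evaluation lacks a baseline comparison", "notation is unclear"]
human = ["please add a baseline comparison to the evaluation"]
rate = overlap_rate(ai, human)
```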

Ongoing and Future Work

  • Ongoing and future work includes quantifying the effects of AI usage on work, designing and evaluating workplace applications that incorporate AI, and developing human-generative AI interaction guidelines.

Acknowledgements

  • The speaker expresses gratitude to their advisors, committee members, collaborators, and funding sources for supporting their research and making their PhD journey possible.
