https://fllm-conference.org/FLLM2023/index.php
Abu Dhabi, UAE. November 22-23, 2023.
(Co-located with SNAMS <https://emergingtechnet.org/SNAMS2023/index.php> and
Technically Co-Sponsored by IEEE)
*FLLM Call for Papers:*
With the emergence of foundation models (FMs) and large language models
(LLMs) that are trained on vast amounts of data at scale and are adaptable
to a wide range of downstream applications, artificial intelligence is
undergoing a paradigm shift. BERT, T5, ChatGPT, GPT-4, Falcon 180B, Codex,
DALL-E, Whisper, and CLIP now serve as the foundation for new applications
ranging from computer vision to protein sequence analysis, and from speech
recognition to coding, whereas earlier models typically had to be trained
from scratch for each new task. The capacity to experiment with, examine,
and understand the capabilities and potential of next-generation FMs is
critical to undertaking this research and guiding its path. Nevertheless,
these models are largely inaccessible: the resources required to train them
are highly concentrated in industry, and even the assets (data, code)
required to replicate their training are frequently withheld because of
their commercial value. At present, mostly large technology companies such
as OpenAI, Google, Facebook, and Baidu can afford to build FMs and LLMs.
Despite the widely publicized use of FMs and LLMs, we still lack a
comprehensive understanding of how they work, why they underperform, and
what they are even capable of, owing to their emergent properties. To
address these problems, we believe that much of the critical research on
FMs and LLMs will require extensive multidisciplinary collaboration, given
their fundamentally sociotechnical nature.
The International Symposium on Foundation and Large Language Models (FLLM)
addresses the architectures, applications, challenges, approaches, and
future directions of FMs and LLMs. We invite the submission of original
papers on all related topics, with special interest in, but not limited to:
- *Architectures and Systems*
- Transformers and Attention
- Bidirectional Encoding
- Autoregressive Models
- Massive GPU Systems
- Prompt Engineering
- Fine-tuning
- *Challenges*
- Hallucination
- Cost of Creation and Training
- Energy and Sustainability Issues
- Integration
- Safety and Trustworthiness
- Interpretability
- Fairness
- Social Impact
- *Future Directions*
- Generative AI
- Explainability
- Federated Learning for FLLM
- Data Augmentation
- Natural Language Processing Applications
- Generation
- Summarization
- Rewriting
- Search
- Question Answering
- Language Comprehension and Complex Reasoning
- Clustering and Classification
- *Applications*
- Natural Language Processing
- Communication Systems
- Security and Privacy
- Image Processing and Computer Vision
- Life Sciences
- Financial Systems
*Submission Guidelines:*
Papers submitted to FLLM must be the original work of the authors and may
not be simultaneously under review elsewhere. Papers that have been
peer-reviewed and have appeared at other conferences or workshops may not
be submitted to FLLM. Authors should be aware that IEEE has a strict policy
with regard to plagiarism
(https://www.ieee.org/publications/rights/plagiarism/plagiarism-faq.html).
The authors' prior work must be cited appropriately.
All papers that are accepted, registered, and presented at FLLM will be
submitted to IEEE Xplore for publication with the SNAMS proceedings.
*Important Dates*
- *Paper Submission Deadline: 28-October-2023 (Firm and Final) *
- Notification of Acceptance: 8-November-2023
- Camera-ready Submission: 15-November-2023
Please send any questions about FLLM to Yaser Jararweh:
yijararweh@just.edu.jo