POLICY ON THE USE OF AI AND AI-SUPPORTED TECHNOLOGIES

1. General Provisions
1.1. The Editorial Board recognizes the potential of Artificial Intelligence (AI) as an auxiliary tool, but emphasizes the principle of “Human-in-the-loop”. This means that the final responsibility for interpreting social contexts, ethical evaluation, and scientific novelty rests exclusively with the human author.
1.2. This Policy applies to all stages of manuscript preparation: from data collection and analysis to text editing and visualization. The use of AI is permissible only when it does not undermine the originality and academic integrity of the work.
1.3. This Policy is based on the principles and recommendations of international organizations in publishing ethics, including the Committee on Publication Ethics (COPE).
2. Author Status and Responsibility
2.1. Generative AI systems (LLMs) do not meet authorship criteria. Authorship requires the ability to interpret results, approve the final version of the manuscript, and bear legal responsibility for its content. Since AI lacks legal subjectivity, it cannot be listed as an author or co-author of a scientific publication.
2.2. Human authors bear personal responsibility for:

  • Data accuracy: Every fact, figure, or statement generated or processed with AI must be validated against primary sources.

  • Absence of plagiarism: Using AI-generated text without proper disclosure, or employing AI to paraphrase others’ works without citation, constitutes plagiarism.

  • Avoidance of “hallucinations”: References to non-existent sources or algorithmic errors do not exempt authors from responsibility for disseminating false information.

  • Bias prevention: Researchers must account for algorithmic bias against particular social groups or genders.

2.3. Authors must critically evaluate all AI outputs and ensure that:

  • Theoretical conclusions and conceptual frameworks are the result of their own intellectual effort, not mere algorithmic compilation.

  • Ethical aspects of the research (especially involving human subjects) are personally analyzed by the author.

2.4. Authors must not upload confidential data (e.g., unpublished interviews, respondents’ personal data) into generative AI systems, as this may lead to privacy violations and data leakage into training datasets.

3. Permitted and Prohibited Uses of AI
3.1. AI may be used only as a technical support tool in areas that do not involve the creation of new scientific knowledge, such as:

  • Language and stylistic editing (grammar, spelling, translation — subject to author verification of terminology).

  • Technical data processing (coding open data, formatting references, assisting with statistical code).

  • Search query structuring (keywords for scientometric databases).

  • Brainstorming (structuring ideas at the conceptual stage, without including AI-generated ideas as research results).

3.2. Delegating intellectual product creation to AI is considered a violation of research ethics. Prohibited uses include:

  • Generating scientific hypotheses or conclusions.

  • Writing substantive parts of the text (introduction, literature review, results analysis, discussion).

  • Fabricating empirical data (surveys, interviews, transcripts, statistics).

  • Automatic paraphrasing of others’ works to bypass plagiarism checks.

  • Fact verification without consulting original sources (due to risk of fabricated references). AI outputs must never be treated as primary sources or authoritative evidence.

3.3. Exceptions apply when AI itself, its algorithms, or outputs are the subject of research. In such cases, all generated materials must be clearly marked as quotations or appendices.

4. Visualization and Graphic Data
4.1. AI must not be used to generate graphs, charts, or maps based on fabricated or unverified datasets. All visualizations must cite the empirical data source.
4.2. Any image (infographic, conceptual model, reconstruction) created or significantly edited with AI (e.g., Midjourney, DALL-E, Canva AI) must include:

  • Tool name and version

  • Date of generation

  • Authorship of input parameters

    Example: “Fig. 1. Model of social interaction. Generated with Midjourney v.6.1 (accessed: 05.03.2026) based on the author’s parameters.”

4.3. AI must not be used to “enhance,” restore, or alter archival documents, photographs, archaeological artifacts, or other primary sources. Any AI processing (e.g., colorization, sharpening) must be declared as reconstruction, not original evidence.
4.4. Authors must ensure that AI-generated illustrations do not infringe third-party copyrights and comply with the licensing policy of the chosen AI service.
4.5. Authors must retain the datasets and prompts used for generation. Editors or reviewers may request these for verification.

5. Disclosure Requirements
5.1. Authors must openly declare AI use at any stage of manuscript preparation. Hidden use is considered academic misconduct.
5.2. Depending on purpose, disclosure must appear:

  • In Introduction or Methodology if AI was part of research design or data analysis.

  • In a dedicated AI Declaration section (before References) if AI was used for editing, translation, or technical support.

5.3. Example declaration: “During manuscript preparation, the authors used [TOOL NAME/VERSION] for [PURPOSE: e.g., stylistic editing of English text / code generation for data analysis]. After using the tool, the authors carefully reviewed and edited the content and take full responsibility for the final publication.”
5.4. The Editorial Board may request prompts or AI dialogues for verification.
5.5. Standard tools with integrated AI functions that do not generate substantive content (e.g., spell-check in MS Word, Grammarly in correctness-only mode, citation managers) do not require disclosure.

6. Policy for Reviewers and Editors
6.1. Reviewers and editors must not upload manuscripts (or fragments) into generative AI systems for analysis, review writing, or text checking. This violates confidentiality and copyright.
6.2. Peer review is an expert evaluation based on human judgment. AI cannot adequately assess novelty or theoretical depth.
6.3. Editors may use AI-detection software to screen manuscripts. Results are indicative only and require further expert review.
6.4. Editorial Board members may use AI only for technical tasks (e.g., formatting references, translating correspondence), excluding personal data or manuscript content.
6.5. Reviewers found to have used AI for evaluation will be removed from the reviewer database and their reviews annulled.

7. Consequences of Violations and Appeals
7.1. Undeclared AI use (text generation, idea fabrication, data falsification) is treated as academic misconduct equivalent to plagiarism or fabrication.
7.2. If detected during review:

  • Editors may request explanations and prompts.

  • Confirmed violations result in rejection without resubmission rights.

  • Institutions may be notified of ethical breaches.

7.3. If detected post-publication, the article will be retracted according to COPE protocols. The journal must also declare which AI tools it uses for manuscript screening.
7.4. Authors may appeal within 14 days, providing evidence of authenticity (drafts, Word revision history, interview records, real archival sources). Independent experts may be consulted.
7.5. Authors suspecting AI-generated reviews may request verification.
7.6. Editorial Board decisions after appeal are final.