Abstract
Whilst linguistic research on speakers of minority genders has increased in the past decade, much less is known about how they can best be included in broader (socio)linguistic research. The current paper compares how five different gender measures for survey research are filled out and evaluated by a sample of LGBTQ+ people (N = 682). It finds that providing a larger range of response options allows researchers to gain a better view of the gender diversity in their sample, whilst preventing refusals and loss of participants. The gender question that was least likely to be refused and was rated the most accurate, most inclusive, and clearest was a six-option multiple-choice question which included a “prefer not to say” and a write-in option. This question reconciles two competing interests in the treatment of queer data: it explicitly recognizes and names minority genders and simultaneously carves out space for participants to refuse categorization or write out gender identities beyond those preset by the researcher.
1 Introduction
Linguistic research on language and gender has increasingly addressed the speech of people outside the male-female binary, for example by studying the speech of non-binary speakers (e.g. Becker et al. 2022; Calder 2021; Gratton 2016; Jones 2022; Schmid and Bradley 2019; Steele 2019). However, much less has been written about how to adapt methodological approaches to wider linguistic research in order to be inclusive and accurate for participants with minority genders. This is relevant in participant-based studies which aim to report on the gender representativeness of a sample, and especially relevant to sociolinguistic work that uses gender as a predictor. For example, in a sociophonetic setting, miles-hercules and Zimman (2019) find that automated measurements of vowel formants can be influenced by the gender label that researchers assign to non-binary voices in an interview. Thus the selection of gender labels is able to influence how data are interpreted and read, highlighting the importance of precision and accuracy in the use of those labels.
Furthermore, as Guyan (2021) points out in his sociological work on queer data, studies which do not include (enough) options for minority genders may “design out” potential survey participants in a systematic way. This is undesirable for two reasons:
From a data-analytical perspective, a part of participant responses may be inaccurate if not enough options are given (Guyan 2022), for example non-binary participants may indicate a binary gender even when they do not have that gender.
From an ethical point of view, the lack of inclusion of minority genders excludes and misrepresents people of those genders in ways that may harm them (cf. Taylor et al. 2019).
For example, as Taylor et al. (2019) report, the erasure of minority gender identities often plays a central role in the experiences of discrimination that non-binary individuals face. In two focus groups of non-binary users of a UK National Health Service gender identity clinic, they found that one of the main themes in the participants’ experiences was invisibility, which participants reported led to them being dismissed, erased, and misunderstood (Taylor et al. 2019: 198–199). Including non-binary identities in survey questions allows researchers to make these identities visible and carve out more space for minority genders within the scope of survey research, whilst also allowing for more insight into the language of speakers with minority genders.
Finally, addressing these concerns by better including minority genders in survey research will allow for sociolinguistic work to address language and gender research in a more refined and accurate way. As Zimman (2020) argues, research on trans and non-binary speakers is able to engage in a new way with themes of linguistic innovation, agency, and cognition in the context of language and gender, challenging the discipline’s knowledge and creating new space for the knowledge of marginalized communities. Using more accurate measurements which allow for the recognition and inclusion of non-binary speakers across sociolinguistic studies may be able to ensure that such perspectives are more likely to be detected even when the object of the study is not specifically the speech of non-binary speakers.
However, most of the research that has sought to address how to include non-binary identities into demographic questions about gender has aimed to simply provide new guidelines for such questions (e.g. Cameron and Stinson 2019; Ruberg and Ruelos 2020; Spiel et al. 2019; Vincent 2018), whilst only a smaller number of studies (Bauer et al. 2017; Lindqvist et al. 2021; Medeiros et al. 2020) have investigated the effectiveness of and attitudes towards these questions empirically (thus far, none of these are in the field of linguistics). Even then, the findings from those studies are somewhat limited: whilst they find that asking gender questions which go beyond a male-female binary does not cause any issues, the solutions they propose only form a limited range of options and have their own potential issues with inclusivity and accuracy. For example, the third gender option proposed by Medeiros et al. (2020), “other”, treats all minority genders as identical, and Bauer et al.’s (2017) proposed gender questions focus on sex assigned at birth in a way that is not always appropriate, especially outside of healthcare settings. Furthermore, the proposed new gender questions have not yet been compared to each other.
The current paper addresses this gap in the field by comparing five different gender questions for survey research with different response options in a sample of LGBTQ+ participants, focusing specifically on transgender, non-binary, and genderqueer participants. We present data on the gender diversity that is lost when gender questions present too few response options, the degree to which LGBTQ+ participants (and trans, non-binary, and gender non-conforming participants in particular) self-report refusing to answer those questions, and the way these participants evaluate the different questions.
Below we review the previous approaches that have been taken in empirical research on non-binary gender questions. We address the potential tension between their respective advantages and disadvantages, including concerns over essentialism and invisibility. We then test five possible ways of asking about gender, as detailed in the methodology and results. We end with some recommendations based on our findings.
2 Previous approaches to non-binary gender questions
2.1 Including an “other” option
One of the principal existing studies to investigate the inclusion of further genders beyond male and female in gender questions is that of Medeiros et al. (2020). This study’s main aim was to test whether including a third gender option “other” would be confusing or undesirable to nationally representative samples of the population in Canada, the US, and Sweden. They found that participants did not differ in survey satisfaction rates based on the inclusion of an “other” option. They posited that the addition of non-binary options at the very least provides data on a vulnerable population without provoking any adverse reactions to the survey itself. This was also found in an exploration of a non-binary sex question for the census in Scotland (National Records of Scotland 2021), where the question included the option “in another way” (as opposed to identifying as a man or a woman), which did not negatively affect response rates.
One major drawback of the approach taken by Medeiros et al. (2020) is that by framing the third gender option as “other” compared to the binary options of “male” and “female”, it frames those minority gender identities as essentially and most importantly in opposition to binary genders, grouping all other genders as one and the same. This reinforces the situation in which non-binary individuals must always contend with the domination and centring of the notion of a gender binary (cf. Taylor et al. 2019; but also Dembroff 2020). Furthermore, the phrasing “other” quite literally others participants with minority genders (English 2022). Finally, the singular “other” option has a particularly high risk of erasing minority genders outside of Western contexts: minority genders such as hijra (see e.g. Goel 2016), two-spirit people (Robinson 2020), and travestis (Jarrín 2016) have different histories, contexts, and meanings that are more specific than “anything outside the binary”.
2.2 The two-step approach
Bauer et al. (2017) focused on Canadian LGBT+ people’s own responses to questions about gender and trans status for health-based population surveys. They explored how different sets of questions were evaluated within gender-diverse groups, whilst separating questions on sex assigned at birth from questions on gender identity. They found that non-responses were rare for such questions (only one out of 311 survey respondents) and that comprehension was good. They recommended a specific two-step question with two options for sex assigned at birth – “male” or “female” – and four options for gender identity – “male”, “female”, “indigenous or cultural minority identity (e.g. two-spirit)”, or “something else (e.g. gender fluid, non-binary)”.
Still, trans populations in Bauer et al.’s (2017) study run the risk of misclassification, as a number of trans respondents did answer that their gender identity is in line with their sex assigned at birth. This suggests that asking for participants’ sex assigned at birth is not an effective way to determine whether a participant is trans. For example, some people who have medically transitioned do not see themselves as having a differentiation between their sex and their gender (Bauer et al. 2017: 15). This may also be the case for many trans people who have not undertaken medical steps to transition. In these cases it may be better to ask directly about trans status and leave questions about sex assigned at birth to more specialist healthcare settings. Furthermore, the term “sex assigned at birth” may simply not be a familiar term to all participants.
2.3 Open questions
To address the issues with potential erasure and misclassification discussed above, Cameron and Stinson (2019) proposed leaving the categorization up to participants themselves: they argued that the best way to ask participants about their genders is to ask an open question using a text box into which participants type their answers. The open text box option offers complete autonomy and control for participants to name their gender identity, rather than having to pick an option that is “close enough”. Whilst such text box data may seem difficult to treat, Lindqvist et al. (2021) found that most responses to an open gender question are easy to code (99 % of cases in their sample of 794 US participants in the general population were entries like “male”, “female”, “f”, “m”, “woman”, and “man”).
The open question may also have its drawbacks, however, as Bauer et al. (2017) argued: the approach leaves space open for ambiguity in a way that may not always be easy to resolve when it comes to queer data. Whilst combining the answers “male”, “man”, “Man”, and “just a man” may be low-stakes, it becomes more difficult to decide whether the answer “trans man” should be categorized within that same category, or whether this participant specifically wanted to stress the trans aspect of his gendered experiences and wanted to be classified separately. Guyan (2022: 129) pointed out that what may be perceived as an “error” in data cleaning may in fact be a conscious subversion of the rules and expectations of data collection practices. For example, if one of the participant responses to a gender question is the entry “gay”, this could be read as an error (with the participant misinterpreting the question as a sexuality rather than a gender question). However, the participant may have intentionally given this response to convey that their gendered experience is best captured by being read as gay, and that this matches their internal sense of gender.
The open question proposal has not yet been tested evaluatively by means of a demographic survey, although English (forthcoming) found that in focus groups many queer people report that they would find open questions the most desirable.
2.4 De-essentializing and essentializing minority genders
All efforts to categorize and ask about gender as a demographic measure encounter the problem that they essentialize the gender identities involved in the question. This runs counter to a shift within queer linguistics where researchers are moving away from treating LGBTQ+ speakers as essentialized or as a homogeneous group (see e.g. Jones 2021). Rather, current research tends to shed light on the complex nature of queer identities and how they are negotiated in different contexts (Barrett 2017; Borba 2019; Jones 2018). This movement speaks to a tension in the collection of queer data that Guyan (2022: 127) has described: researchers have to balance being precise about the nuances, complexities, and porous boundaries of different identities and experiences with the sometimes strategic use of essentializing terms or identities to pursue policy goals and mobilize action. The latter, coined “strategic essentialism” by Spivak (1988), constitutes a temporary presentation of a group as having a shared essence to pursue specific goals (e.g. in this case making visible and normalizing genders outside a male-female binary by introducing a “non-binary” option), rather than a universal conceptualization of this group as having a shared essence (e.g. many second-wave feminist approaches establishing women as different from men based on a shared cultural essence; e.g. Bucholtz 2014: 31). As Guyan (2022: 128) pointed out, strategic essentialism has its pitfalls in that it erases differences within marginalized groups – especially for those within the groups which are already most likely to be erased.
In the case of survey research, the explicit mention and labelling of a minority gender category like “non-binary” carves out space for participants for whom this label fits well, countering some of the invisibility that non-binary participants may otherwise face. At the same time, it may erase the diversity of minority genders beyond this label, which open questions would be less likely to do. To be more fully inclusive of participants of minority genders, survey research needs to take both interests into account.
2.5 Research questions
Whilst it has been shown that improving gender questions by adding response options beyond the gender binary does not hinder participant comprehension (Bauer et al. 2017; Medeiros et al. 2020), alternative questions and response options have rarely been tested, and even less so in comparison to each other. The current paper compares a range of five gender questions, including those proposed in these previous studies, by asking the following research questions:
How well do different gender questions represent gender diversity among LGBTQ+ participants? (insight into diversity)
Which questions are LGBTQ+ participants most likely to refuse? Which ones are they most likely to answer? (refusal rates)
How are the different gender questions evaluated in terms of their accuracy, inclusivity, and clarity? (evaluation)
For the first two questions we focused on data from all LGBTQ+ participants, in order to map gender diversity across questions, as well as refusal rates for the wider community. This was to ensure no gender diversity was missed for participants who do not use the label “trans”, as well as to assess if cisgender participants refused to answer any questions they perceived as less inclusive. For the issue of evaluation, we centred non-cisgender perspectives. This was to ensure the appropriate weight was given to their experiences, taking into consideration the frequent marginalization of their knowledge (see Fricker and Jenkins 2017 on epistemic injustice harming trans individuals as knowers). It is also for this reason that no cisgender heterosexual participants were invited to the survey; note that Medeiros et al. (2020) found that for general populations the inclusion of a non-binary option in gender questions was not a problem. It is of course possible that this depends on the elaborateness of the question, and may differ if the range of responses is more complex than that proposed by Medeiros et al. (2020), but this was beyond the scope of the current paper.
3 Methodology
3.1 Survey questions
The survey used was built in Qualtrics and approved by the ethics committees at Utrecht University and the University of Edinburgh. Participants were asked to fill out five gender questions with different response options as they would if they were presented with one in a different research survey. The questions that were tested, together with the response options provided, are given in Table 1.
Table 1: Survey questions and the response options provided.

| Question | Wording | Response options |
|---|---|---|
| 1. Two options | What is your gender? | Male; Female |
| 2. Three options (following Medeiros et al. 2020) | What is your gender? | Male; Female; Other |
| 3. Six options | What is your gender? | Man; Woman; Non-binary; Indigenous or cultural minority identity (e.g. two-spirit); Prefer not to say; Other, please specify: [Open text box] |
| 4. Two-step question (following Bauer et al. 2017) | What is your sex assigned at birth, meaning on your original birth certificate? | Male; Female |
| | What is your current gender identity? | Male; Female; Indigenous or cultural minority identity (e.g. two-spirit); Something else (e.g. gender fluid, non-binary) |
| 5. Open question (following Cameron and Stinson 2019) | What is your gender? | [Open text box] |
For each of the multiple-choice questions, an option was added where participants could indicate that they “would refuse to answer this question and leave a survey that included it”. This was then coded in the analysis as a refusal. It was also possible to simply not select any option, which was coded as a non-response. For the open question, participants were instructed to not write anything in the text box if they wanted to indicate they would refuse to answer this question. The five questions were presented in randomized order. After each question participants were asked to evaluate the relevant question by indicating on three 5-point Likert scales the degree to which they found the question clear, inclusive, and accurate to themselves.
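The coding scheme described above can be sketched as follows. This is an illustrative reconstruction, not the authors' actual analysis code: the function name and labels are ours, and an empty open-text answer (the instructed way to refuse the open question) is coded as a non-response, since it cannot be distinguished from a simple skip.

```python
# Illustrative coding of raw survey answers into the three categories
# used in the analysis: response, refusal, and non-response.
REFUSAL_OPTION = ("I would refuse to answer this question "
                  "and leave a survey that included it")

def code_answer(raw, open_question=False):
    """Code one raw answer as 'response', 'refusal', or 'non-response'."""
    if open_question:
        # An empty text box was the instructed way to refuse the open
        # question, but it is indistinguishable from a skip, so it is
        # coded here as a non-response.
        if raw is None or raw.strip() == "":
            return "non-response"
        return "response"
    if raw is None:
        # No option selected at all.
        return "non-response"
    if raw == REFUSAL_OPTION:
        # The explicit refusal option was added to every
        # multiple-choice question.
        return "refusal"
    return "response"
```

The key design point is that selecting the explicit refusal option and skipping the question are kept apart, since only the former signals that the participant would have abandoned the survey.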
Finally, participants were asked some demographic questions, including about trans status (“cisgender”, “transgender”, “neither”, or “unsure”), sexuality, country of residence, age, university education, and whether they considered themselves a person of colour.
3.2 Distribution and sample
The survey was distributed through our own personal networks and a range of LGBTQ+ networks in the United States, the United Kingdom, and the Netherlands, as well as through social media. It was advertised to anyone with an LGBTQ+ identity, rather than just trans and non-binary people, in order to include any participants who feel they do not quite fit the gender binary but may not use the trans and/or non-binary labels.
As is shown in Table 2, a total of 682 participants filled out the survey, 233 from the Netherlands, 224 from the UK, 172 from the US, and 47 from other countries; three participants did not indicate their location. Of the sample, 242 participants were cisgender, 271 were transgender, 95 were neither, and 68 were unsure; three participants did not indicate their trans status.
Table 2: Participant sample by trans status and location.

| | Netherlands | United Kingdom | United States | A different country | Total |
|---|---|---|---|---|---|
| Cisgender | 87 | 86 | 56 | 13 | 242 |
| Transgender | 83 | 95 | 76 | 17 | 271 |
| Neither | 26 | 29 | 30 | 10 | 95 |
| Unsure | 37 | 14 | 10 | 7 | 68 |
| Total | 233 | 224 | 172 | 47 | 682 |
Note: The grand total of 682 includes the six participants who did not respond to one of these demographic questions; they are excluded from the rest of the table.
The sample was less balanced for university education and race. Only 60 participants were not attending and had never attended university, and only 47 participants considered themselves a person of colour, with 595 participants indicating they did not, 30 participants indicating the term “person of colour” did not adequately describe them, 6 participants selecting “other”, and 4 participants not answering the question. Young people made up the largest share of the participants, as can be seen in the age distribution given in Figure 1.

Age distribution of the sample.
4 Results
4.1 Overview
The results of the survey questions showed that the inclusion of more than two gender options greatly increased accuracy to the diversity of gender identities within LGBTQ+ populations and reduced the number of participants refusing to partake in a survey or giving non-responses. Larger numbers of response options also substantially improved participants’ evaluations of the questions. The highest rated and least refused gender question was the six-option multiple-choice question.
4.2 Participant diversity and question refusal
4.2.1 Diversity
The sample of LGBTQ+ participants in our study showed a much wider range of gender identities than two labels can capture. Increasing the number of response options allowed for this population’s gender diversity to be more accurately represented. This is summarized in Figure 2, which shows the answers given across the multiple-choice questions with two, three, and six response options available. As can be seen in the figure, the two-option multiple-choice question was met with a high number of refusals and non-responses (38 %), whilst a large group of participants indicated “other” in the three-option question (37 %). In the six-option question, further diversity came to light: 34 % of respondents selected “non-binary” as their gender, 0.4 % selected a culturally specific minority gender, and 9 % of respondents made use of the write-in option “other, please specify”. Generally speaking, the respondents who selected these additional options were the same ones who refused to answer the binary question. However, the number of people indicating one of the two binary genders went down from 62 % in the binary question to 54 % in the three-option question and to 50 % in the six-option question, which implies that some participants who may not have refused a two-option question actually did have a gender identity that was better represented by one of the other response options offered. This suggests that studies which work with binary gender questions do not just lose participants, but also gather data which is inaccurate to participants’ genders.

Responses to gender questions with an increasing number of multiple-choice questions.
When participants were given the option to answer the open question with no preset responses, we see even more diversity. Figure 3 shows the range of repeated answers after removing all spaces, punctuation, and capitalization. Three things stand out in the figure. First, a considerable part of the sample gave a unique answer which no other participants gave (16 %). Second, many participants gave a combination of multiple gender markers (e.g. “non-binary/male”). Both highlight the possible complexity and nuance of participants’ identities beyond one-word labels. Third, when given the open question, many participants included information about their trans status (e.g. “trans man”, “cis female”). This suggests that participants were unsure about which types of gendered information the researchers were interested in. This also comes to the fore in participants’ evaluations of the open question, which was ranked relatively low for clarity (see Figure 7).

Recurring answers to the open gender question, with no preset responses. The total number of unique answers is shown to the right of the graph, as well as the number of empty responses.
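The normalization used for the open answers (removing all spaces, punctuation, and capitalization before counting recurring answers) can be sketched as below. The function names are ours, not the authors'; the sketch only illustrates the collapsing step.

```python
import string
from collections import Counter

def normalise(answer):
    """Lowercase an answer and strip all whitespace and punctuation."""
    drop = set(string.whitespace + string.punctuation)
    return "".join(ch for ch in answer.lower() if ch not in drop)

def tally_answers(answers):
    """Count recurring normalized answers; answers given by only one
    participant are tallied separately as unique."""
    counts = Counter(normalise(a) for a in answers if a and a.strip())
    recurring = {k: v for k, v in counts.items() if v > 1}
    n_unique = sum(1 for v in counts.values() if v == 1)
    return recurring, n_unique
```

Note that this collapsing merges surface variants like “Non-binary” and “nonbinary”, but still keeps “trans man” distinct from “man”, leaving exactly the categorization dilemma discussed in Section 2.3 to the researcher.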
4.2.2 Question refusal
As can be seen in Figure 4, a considerable number of participants refused to answer some of the gender questions, depending on which response options were given. This was especially true when only two options were given (38 % refusals and non-responses), or when participants were asked about their sex assigned at birth in the two-step question (22 % refusals and non-responses). The open question received a relatively large number of non-responses, despite leaving the classification of one’s gender up to the participants themselves entirely (11 %, which is higher than the more limited three-option question, which was refused or not responded to by 10 % of participants). The six-option question was refused or not responded to by only 3 % of participants, whilst 4 % selected the option “prefer not to say”. Although the latter option meant that no information about a participant’s gender was given, it meant participants were still willing to continue a survey or experiment which included this question (which was not the case for the other refusals). No respondents refused all questions.

Response refusals across different question types.
4.3 Question evaluations
To investigate how the different gender questions were rated we focused on non-cisgender participants’ evaluations (i.e. those who indicated being transgender, neither cis nor trans, or unsure). Participants rated these questions on three parameters: accuracy, inclusivity, and clarity. We tested overall differences by means of Kruskal-Wallis tests and pairwise comparisons between question types by means of Benjamini-Hochberg adjusted Dunn’s tests.
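The post-hoc procedure just described – pairwise Dunn’s tests on the jointly ranked ratings, with Benjamini-Hochberg adjustment of the resulting p-values – can be sketched in Python. This is an illustrative, standard-library-only implementation (it omits the tie correction in the Dunn variance term); the actual analyses in a study like this would use a dedicated package such as R’s `dunn.test` or Python’s `scikit-posthocs`.

```python
from itertools import combinations
from math import erf, sqrt

def average_ranks(values):
    """Rank all values 1..N, giving tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2  # mean of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def dunn_pairwise(groups):
    """Two-sided p-values for all pairwise Dunn z-tests
    (dict of group label -> list of ratings; no tie correction)."""
    labels = list(groups)
    pooled = [x for g in labels for x in groups[g]]
    ranks = average_ranks(pooled)
    mean_rank, size, start = {}, {}, 0
    for g in labels:
        n = len(groups[g])
        mean_rank[g] = sum(ranks[start:start + n]) / n
        size[g] = n
        start += n
    N = len(pooled)
    pvals = {}
    for a, b in combinations(labels, 2):
        se = sqrt((N * (N + 1) / 12) * (1 / size[a] + 1 / size[b]))
        z = abs(mean_rank[a] - mean_rank[b]) / se
        # two-sided p from the standard normal CDF
        pvals[(a, b)] = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
    return pvals

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values for a dict of raw p-values."""
    items = sorted(pvals.items(), key=lambda kv: kv[1])
    m, running_min, adjusted = len(items), 1.0, {}
    for i in range(m, 0, -1):
        key, p = items[i - 1]
        running_min = min(running_min, p * m / i)
        adjusted[key] = running_min
    return adjusted
```

A Kruskal-Wallis omnibus test would be run first (e.g. `scipy.stats.kruskal`), with the pairwise Dunn tests only interpreted if the omnibus test is significant.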
The ranked evaluations for accuracy are shown in Figure 5, created using the likert() function in the HH package in RStudio (Heiberger and Robbins 2014). As can be seen here, the most highly rated question type for accuracy is the six-option multiple-choice question, followed by the open question. These were both rated very positively. Then, the two-step question was perceived somewhat more neutrally with some high and some low evaluations. The three-option multiple-choice question was rated quite negatively, although not as negatively as the binary multiple-choice question, which was rated mostly negatively. The differences in ratings were highly significant (χ2 = 999.41, p < 0.001), with Benjamini-Hochberg adjusted Dunn’s tests showing all comparisons to be significant, most at the level of p < 0.001, with the exception of the comparison between the six-option question and the open question, which was significant at the level of p = 0.017 (below the Holm-Bonferroni corrected α = 0.025).

Non-cisgender participants’ Likert scale ratings for accuracy evaluations (ranked).
For the inclusivity evaluations, the ranked options are identical to the accuracy rankings, with the six-option question being rated most positively and the binary question most negatively. Here, the differences in ratings were also highly significant (χ2 = 1,232, p < 0.001), with Benjamini-Hochberg adjusted Dunn’s tests showing all comparisons to be significant at the level of p < 0.001, other than the comparison between the six-option question and the open question, which was significant at the level of p = 0.036 (below the Holm-Bonferroni corrected α = 0.05). The ranked evaluations are shown in Figure 6.

Non-cisgender participants’ Likert scale ratings for inclusivity evaluations (ranked).
As can be seen in Figure 7, patterns for the clarity-based evaluations differed in two ways from the inclusivity- and accuracy-based ones: the open question was rated lower than the two-step question, and the ratings for the two-option and three-option questions were much less overwhelmingly negative than those for inclusivity and accuracy. Still, ratings all differed significantly between the question types (χ2 = 318.08, p < 0.001), with Benjamini-Hochberg adjusted Dunn’s tests showing all comparisons to be significant at the level of p < 0.001.

Non-cisgender participants’ Likert scale ratings for clarity evaluations (ranked).
5 Recommendations
The current study has presented evidence that the most desirable way to ask about gender in the US, the UK, and the Netherlands is to use a six-option multiple-choice question with the following options:
man
woman
non-binary
indigenous or cultural minority identity (e.g. two-spirit)
other, please specify: [text box]
prefer not to say
This allowed LGBTQ+ participants to much more accurately describe their own genders than in the binary, three-option, or two-step multiple-choice questions, and meant participants were least likely to refuse a survey or to not respond to a question. The six-option question was deemed to be much clearer than an open question, where participants were not always sure which exact aspects of their gender, gender history, and gender experience they were being asked about.
One important advantage of the six-option question is that it includes a write-in option. This way, it carves out space for participants who face the greatest potential alienation from the specific structure of a gender survey question (e.g. participants whose gender is not captured by any listed options, participants who may resist gender categorization altogether, and participants who value describing the nuance and complexity of their gender).[1] This allows the question to list preset minority gender categories, like “non-binary”, explicitly recognizing this minority gender label, without treating it as wholly exhaustive. It balances the strategic essentializing of minority genders (Guyan 2021; Spivak 1988) for the purpose of making this gender category visible, with the possibility for participants to specify a more complex or nuanced gender identity or to resist categorization. This means minority genders which fall outside the scope of “non-binary” – the group most likely to be disadvantaged by such essentializing – can still be recognized and made visible.
As our findings are based on a sample drawn mostly from Western, white-majority countries and mostly from highly educated participants, they are not universally applicable. However, for similar samples, they offer a potential blueprint for gender questions in survey research. The recommended six-option multiple-choice question builds on and improves work by Medeiros et al. (2020) and Bauer et al. (2017), who provided some of the first evidence that more inclusive gender questions are possible, and it incorporates calls to include open text box answering options (Cameron and Stinson 2019; English 2022), whilst still providing the clarifying framework of a multiple-choice question. Using the six-option multiple-choice question has the advantage of providing more accurate data, being more inclusive to trans, non-binary, and gender non-conforming participants, being clearer, and more generally preventing the loss of representativeness that follows from some LGBTQ+ participants refusing to take part in surveys.
Finally, the use of this recommended gender measure may be able to strengthen sociolinguistic research and research on language and gender more broadly by ensuring that participants of minority genders are not missed. As Zimman (2020: 15) argued, the perspective of trans, non-binary, and gender non-conforming language users pushes linguists to look beyond deterministic and homogenizing accounts of social practice to consider how language users may exceed normative categories, act in their margins, or travel between them. Using an accurate and inclusive gender measure ensures the potential insights from participants of minority genders for wider language and gender research are not missed, and it recognizes their gendered experiences and practices as equal, rather than erasing, misrepresenting, or knowingly or unknowingly excluding them.
Acknowledgments
We thank the anonymous peer reviewers for their feedback, which significantly improved the quality of the paper. We also thank Kirstie Ken English and Eduardo Alves Vieira for their comments on earlier versions of the paper. Finally, we thank our participants for their enthusiastic participation in this study.
References
Barrett, Rusty. 2017. From drag queens to leathermen: Language, gender, and gay male subcultures. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195390179.003.0001.
Bauer, Greta, Jessica Braimoh, Ayden Scheim & Christoffer Dharma. 2017. Transgender-inclusive measures of sex/gender for population surveys: Mixed-methods evaluation and recommendations. PLoS One 12(5). e0178043. https://doi.org/10.1371/journal.pone.0178043.
Becker, Kara, Sameer ud Dowla Khan & Lal Zimman. 2022. Beyond binary gender: Creaky voice, gender, and the variationist enterprise. Language Variation and Change 34(2). 215–238. https://doi.org/10.1017/s0954394522000138.
Borba, Rodrigo. 2019. The interactional making of a “true transsexual”: Language and (dis)identification in trans-specific healthcare. International Journal of the Sociology of Language 2019(256). 21–55. https://doi.org/10.1515/ijsl-2018-2011.
Bucholtz, Mary. 2014. The feminist foundations of language, gender, and sexuality research. In Susan Ehrlich, Miriam Meyerhoff & Janet Holmes (eds.), The handbook of language, gender, and sexuality, 23–47. Chichester: Wiley. https://doi.org/10.1002/9781118584248.ch1.
Calder, Jeremy. 2021. Whose indexical field is it? The role of community epistemology in indexing social meaning. Texas Linguistics Society 39. 39–55.
Cameron, Jessica & Danu Anthony Stinson. 2019. Gender (mis)measurement: Guidelines for respecting gender diversity in psychological research. Social and Personality Psychology Compass 13(11). e12506. https://doi.org/10.1111/spc3.12506.
Dembroff, Robin. 2020. Beyond binary: Genderqueer as critical gender kind. Philosophers’ Imprint 20(9). 1–23.
English, Kirstie Ken. 2022. T.E.M.P.S. question design standards. Available at: https://kenglish95.github.io/posts/2022/06/TEMPS.
English, Kirstie Ken. forthcoming. How should differences of sex, gender and sexuality be represented by UK population surveys? Glasgow: University of Glasgow PhD thesis.
Fricker, Miranda & Katharine Jenkins. 2017. Epistemic injustice, ignorance, and trans experiences. In Ann Garry, Serene Khader & Alison Stone (eds.), The Routledge companion to feminist philosophy, 268–278. New York: Routledge. https://doi.org/10.4324/9781315758152-23.
Goel, Ina. 2016. Hijra communities of Delhi. Sexualities 19(5–6). 535–546. https://doi.org/10.1177/1363460715616946.
Gratton, Chantal. 2016. Resisting the gender binary: The use of (ING) in the construction of non-binary transgender identities. University of Pennsylvania Working Papers in Linguistics 22(2). 51–60.
Guyan, Kevin. 2021. Constructing a queer population? Asking about sexual orientation in Scotland’s 2022 census. Journal of Gender Studies 31(6). 782–792. https://doi.org/10.1080/09589236.2020.1866513.
Guyan, Kevin. 2022. Queer data: Using gender, sex and sexuality data for action. London: Bloomsbury. https://doi.org/10.5040/9781350230767.
Heiberger, Richard & Naomi Robbins. 2014. Design of diverging stacked bar charts for Likert scales and other applications. Journal of Statistical Software 57. 1–32. https://doi.org/10.18637/jss.v057.i05.
Jarrín, Alvaro. 2016. Untranslatable subjects: Travesti access to public health care in Brazil. TSQ: Transgender Studies Quarterly 3(3–4). 357–375. https://doi.org/10.1215/23289252-3545095.
Jones, Jacq. 2022. Authentic self, incongruent acoustics: A corpus-based sociophonetic analysis of nonbinary speech. Christchurch: University of Canterbury PhD thesis.
Jones, Lucy. 2018. “I’m not proud, I’m just gay”: Lesbian and gay youths’ discursive negotiation of otherness. Journal of Sociolinguistics 22(1). 55–76. https://doi.org/10.1111/josl.12271.
Jones, Lucy. 2021. Queer linguistics and identity: The past decade. Journal of Language and Sexuality 10(1). 13–24. https://doi.org/10.1075/jls.00010.jon.
Lindqvist, Anna, Marie Gustafsson Sendén & Emma Renström. 2021. What is gender, anyway: A review of the options for operationalising gender. Psychology & Sexuality 12(4). 332–344. https://doi.org/10.1080/19419899.2020.1729844.
Medeiros, Mike, Benjamin Forest & Patrik Öhberg. 2020. The case for non-binary gender questions in surveys. PS: Political Science & Politics 53(1). 128–135. https://doi.org/10.1017/S1049096519001203.
miles-hercules, deandre & Lal Zimman. 2019. Normativity in normalization: Methodological challenges in the (automated) analysis of vowels among non-binary speakers. Paper presented at New Ways of Analyzing Variation 48, University of Oregon, 10–12 October.
National Records of Scotland. 2021. Sex and gender identity topic report (Scotland’s Census, 2021). https://www.scotlandscensus.gov.uk/documents/sex-and-gender-identity-topic-report/ (accessed 13 April 2024).
Robinson, Margaret. 2020. Two-spirit identity in a time of gender fluidity. Journal of Homosexuality 67(12). 1675–1690. https://doi.org/10.1080/00918369.2019.1613853.
Ruberg, Bonnie & Spencer Ruelos. 2020. Data for queer lives: How LGBTQ gender and sexuality identities challenge norms of demographics. Big Data & Society 7(1). https://doi.org/10.1177/2053951720933286.
Schmid, Maxwell & Evan Bradley. 2019. Vocal pitch and intonation characteristics of those who are gender non-binary. In Sasha Calhoun, Paola Escudero, Marija Tabain & Paul Warren (eds.), Proceedings of the 19th International Congress of Phonetic Sciences, Melbourne, Australia 2019, 2685–2689. Canberra: Australasian Speech Science and Technology Association. Available at: https://www.internationalphoneticassociation.org/icphs-proceedings/ICPhS2019/.
Spiel, Katta, Oliver Haimson & Danielle Lottridge. 2019. How to do better with gender on surveys: A guide for HCI researchers. Interactions 26(4). 62–65. https://doi.org/10.1145/3338283.
Spivak, Gayatri Chakravorty. 1988. Subaltern studies: Deconstructing historiography. In Ranajit Guha & Gayatri Chakravorty Spivak (eds.), In other worlds: Essays in cultural politics, 197–221. Oxford: Oxford University Press.
Steele, Ariana. 2019. Non-binary speech, race, and non-normative gender: Sociolinguistic style beyond the binary. Columbus: Ohio State University MA thesis.
Taylor, Jessica, Agnieszka Zalewska, Jennifer Joan Gates & Guy Millon. 2019. An exploration of the lived experiences of non-binary individuals who have presented at a gender identity clinic in the United Kingdom. International Journal of Transgenderism 20(2–3). 195–204. https://doi.org/10.1080/15532739.2018.1445056.
Vincent, Benjamin William. 2018. Studying trans: Recommendations for ethical recruitment and collaboration with transgender participants in academic research. Psychology & Sexuality 9(2). 102–116. https://doi.org/10.1080/19419899.2018.1434558.
Zimman, Lal. 2020. Transgender language, transgender moment: Toward a trans linguistics. In Kira Hall & Rusty Barrett (eds.), The Oxford handbook of language and sexuality. Oxford: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190212926.013.45.
© 2024 the author(s), published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.