Online research relies heavily on platforms and panels that sample predominantly from WEIRD (Western, Educated, Industrialized, Rich and Democratic) countries, raising concerns about bias. Some platforms have recently begun recruiting outside the West, in more GREAT (Growing, Rural, Eastern, Aspirational, and Transitional) regions. We explore how two such global platforms (Toloka and Besample) perform on data quality, in the key aspects of attention, compliance, honesty, reliability, naivety and replicability, compared to West-centric platforms including Amazon Mechanical Turk (MTurk), Prolific, CloudResearch and Connect. We find that participants on the global platforms fail attention-check questions more often than those on Prolific, CloudResearch and Connect, but not more than MTurk participants. Most attention failures occurred among participants with relatively low English proficiency. On the other aspects of compliance, honesty, reliability, naivety and replicability, the global participants perform comparably to, or somewhat worse than, participants on Prolific, CloudResearch and Connect, but better than those on MTurk. This reveals an interesting phenomenon: global participants often provide valid responses even after failing attention-check questions, which we find can be partially explained by language proficiency. We also examine how differences in data quality relate to location, usage patterns, demographic patterns and the time of day across time zones.