Any serious reading of "Continental" philosophy makes these sorts of questions seem absurd, and what is interesting is that a PROGRAMMER, a hero computer scientist, thought them ungrammatical, ill-formed and absurd.
"Can ai be conscious?" would be an interesting question if ai met one tenth of its technical promises, indeed, if it worked.
But what it proves is something of which Kant and Hegel were already aware: that the ability to reason with symbols is (1) mechanizable and (2) not definitional of intelligence.
The "reasonintg" that may be definitional is that which subsumes itself (in, for example, Kant's transcendental analytic) in a benigh-rather-than-vicious circle and which participates in a collective and a social process of reasoning: not mathematical calculation at all.
What's amusing is that Hegel "knew" how to frame and answer what happens to be a question that exceeds the topos of natural science: he called the mechanist explainers of his time, along with various quacks, "phrenologists," who pretended to crudely divide up the brain in an operation which presumes, without argument, that social organization is secondary.
Words, my lord: words, words, words. Seventeenth-century philosophers knew all about the structure of the real world, which consists, with an apparent irreducibility, of material extension in space and time, and an antiworld consisting of sensory qualia, dreams and reflections.
Reducing the one to the other is folly, a philosopher's stone. Apprehending instead why it is we are even able to know the existence of the two worlds, and finding ways to relate them, is what life is all about.
To believe in AI is dehumanizing, for it presents the logical possibility of an impossible reduction. This is used by totalitarians of all stripes (including "libertarians") to excuse dehumanization.
But what is most germane to this ng is that belief in "AI" is positively correlated with an inability to do math or program computers. "AI" consists merely of ill-understood software, and it is a programmer's responsibility to understand his software. Sure, an expert system or neural net can "learn" and build a database of useful facts. But this database's utility depends STRICTLY on continued expert oversight, and legally so in the case of mission-critical software.
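A minimal sketch of what "continued expert oversight" can mean in practice, in Python, assuming nothing about any particular system (the names KnowledgeBase, propose and approve are hypothetical, chosen only for illustration): whatever the software "learns" sits in a pending state and contributes nothing to the mission-critical path until a named human expert signs off on it.

    # Hypothetical sketch: a "learned" fact is inert until a human expert approves it.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class LearnedFact:
        statement: str                      # what the system inferred
        source: str                         # which run or rule produced it
        approved_by: Optional[str] = None   # stays None until an expert signs off

    class KnowledgeBase:
        def __init__(self) -> None:
            self._facts: List[LearnedFact] = []

        def propose(self, statement: str, source: str) -> LearnedFact:
            # The software may propose facts, but cannot approve its own output.
            fact = LearnedFact(statement, source)
            self._facts.append(fact)
            return fact

        def approve(self, fact: LearnedFact, expert: str) -> None:
            # Only a named human reviewer promotes a fact to usable status.
            fact.approved_by = expert

        def usable_facts(self) -> List[LearnedFact]:
            # Mission-critical queries see approved facts only.
            return [f for f in self._facts if f.approved_by is not None]

The design says nothing clever: only that the system's output never reaches the critical path without a human name attached to it.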
The human element is necessary, and this happens not to be a scientific assertion at all. Instead it is philosophical and synthetic a priori: necessarily true. Transcendental arguments will be provided on request.