Enterprise Forever
:UK => Hardware => Topic started by: gflorez on 2016.June.01. 16:06:48
-
I have opened this thread because I am very interested in "talking computers".
First of all we need some definitions.
The user can hear some speech from a computer, but it can be one of the following three classes:
-Sound sample: just the plain digitised form of a vocal sound wave. Probably the best to the ear, but it can only change in pitch. Example: HDIGI sounds.
-Allophones: a step further; the samples are cut down into simple, short human sounds like phonemes. Once combined, a text can be translated into sound, somewhat. We lose quality and the result is like robot speech. Examples: Speakeasy, the Amiga narrator device.
-Read-aloud: a rule-based program automatically translates a text into a string of allophones. Intonation or pitch is added when orthographic symbols are found, so the robot sound is "humanised". It is by definition tied to a country's language. Example: the Amiga translator library.
This describes the vocal technology of the eighties... but today you can have a close-to-human text-to-speech app on your phone, reading you a book over your car radio via Bluetooth while you drive... Example: the Polish Ivona (https://www.youtube.com/watch?v=kz_0wnK2NN4)
-
That said, HDIGI is not at all similar to Speakeasy. HDIGI is well suited to music or voice digitisation, but Speakeasy already has the sounds stored in ROM; once combined by the user and reproduced, they sound like speech.
You can make the EP speak with beautiful HDIGI samples, but they are canned, frozen in time.
On the other side, Speakeasy doesn't have the same quality, but your EP can chatter a lot of sentences if correctly programmed.
HDIGI has been surpassed; I think that nowadays it is much better to digitise the sound on a PC and convert it to EP sound. On the other side, the robotic speech of Speakeasy has also been surpassed.
I like more Zozo's idea of using the algorithms of the SP0256-AL2 chip combined with better allophone samples stored in a ROM, maybe governed by an EXOS driver...
I am dreaming...
-
I have never heard Speakeasy, but as far as I know its sound quality is like those 1-bit Speccy text-to-speech programs (there is an EP version of one, but I don't remember the name; it was a resident module, so it could be called using :SPEAK or something)
-
Probably based on the same chip.
On the other side the Amiga (https://www.youtube.com/watch?v=BqUyovNS8rs) approach was software based.
-
this is some Speccy hardware, but that EP utility I mentioned was the same quality (and it was a small program, as I remember)
https://www.youtube.com/watch?v=GlARLD4vjmk
and yes, the Amiga is much more capable, of course
-
On the other side the Amiga (https://www.youtube.com/watch?v=BqUyovNS8rs) approach was software based.
On the EP there is Mikrobi, also a software-based voice playback program; its quality is terrible, I think something much better could be written.
-
On the EP there is Mikrobi, also a software-based voice playback program; its quality is terrible, I think something much better could be written.
ah yes, Mikrobi!
so as far as I know, Speakeasy is the same quality XD
-
ah yes, Mikrobi!
so as far as I know, Speakeasy is the same quality XD
Then Speakeasy is a waste of money :D
-
The Intellivision (https://www.youtube.com/watch?v=Hwo2bw_m8E0) had a module based on the same chip, so you can get an idea of how a Speakeasy can sound.
-
The Intellivision (https://www.youtube.com/watch?v=Hwo2bw_m8E0) had a module based on the same chip, so you can get an idea of how a Speakeasy can sound.
ok, this is better than mikrobi
-
Some Intellivision emulators offer Intellivoice emulation. Maybe if the source is found we won't have to reinvent the wheel...
-
For example Bliss, an emulator written in Java.
http://www.zophar.net/ivision/bliss.html
Xanadu, a port of Bliss to C++
http://www.zophar.net/ivision/xanadu.html
There are others in this page:
http://www.zophar.net/ivision.html
-
This is the Intellivoice part extracted from the source of the Jzintv emulator:
-
I've been trying to tempt LGB into implementing Speakeasy in his superb Xep128 emulator.
He likes the idea very much, but he sees some problems.
First of all, Xep128 currently doesn't emulate the Enterprise sound very well, and great parts of the code would need to be rewritten.
And secondly, there are some legal aspects that must be pondered, because the chip inside the device carries a copyright on the internal ROM and the algorithms used.
----
Observing the work of Joseph Zbiciak (author of the jzIntv Intellivision emulator (http://spatula-city.org/~im14u2c/intv/)), he has opted for a mixed approach: he reverse-engineered the SP0256 chip, putting the engine under the GNU licence. Then he supplies the ROM (2Kb) for personal use only.
I think the same could be done on the EP, making use of Joseph Zbiciak's SP0256 engine code, like Xep128 currently does with the Z80 emulator code. Then only a link to the internal ROM is needed, so the responsibility for bad use is put on the user.
To do it even better, a similar permission could be requested from Microchip (the current owner), like the one Joseph Zbiciak already has (http://spatula-city.org/~im14u2c/sp0256-al2/).
The allophones in the little ROM are in reality like recipes for the needed human sounds, much like the Thermomix cooking robot's ones, so it may happen that better recipes will be found.
Then the user could have more and better sounds than the official ROM, by using a custom one.
-
I've been trying to tempt LGB into implementing Speakeasy in his superb Xep128 emulator.
Hmm, I always feel puzzled by the adjectives you use for Xep128 :-P
He likes the idea very much, but he sees some problems.
First of all, Xep128 currently doesn't emulate the Enterprise sound very well, and great parts of the code would need to be rewritten.
And secondly, there are some legal aspects that must be pondered, because the chip inside the device carries a copyright on the internal ROM and the algorithms used.
Indeed. But forgetting the "boring" legal problem: Xep128's main problem is the (non-existent...) sound infrastructure and its sync with the emulation... Audio is currently only a hack, and I am quite surprised that it more or less works without major glitches (ok, they can still be noticed sometimes...), because the audio buffering/output is not synced with the emulation. As all the audio-related code *MUST* be rewritten in the future, I feel it a bit useless to include more audio-related stuff before that point. The second problem with this Speakeasy: it turned out that the chip is actually a CPU. The ROM does not contain audio samples but actual instructions. OK, not a fully generic CPU like the Z80, but still. Thus a very exact emulation would require emulating its internals. And you must do that in *parallel* with the EP-related stuff (ie, CPU/Nick/Dave all need to run "in parallel", even if that just means calling their emulation handlers rapidly in the main loop). Just for a Speakeasy, it is simply not worth including another performance-critical piece in the main loop of the emulator, which would slow things down for every emulation, at every Z80 opcode executed (since currently that is the "elementary" time step that Xep128 can emulate; Nick/Dave "ticks" are calculated from the executed CPU T-cycles. Well, it's not the most precise way; I guess ep128emu for example uses the Nick slot frequency as the "basic timing factor" or such, but the theory is similar after all, just the details are different).
So, the conclusion: I guess, from the viewpoint of an Enterprise-128 emulator, it is really not important *how* that chip works *internally*. The important factor is that EP software using that hardware should work with the emulator, regardless of how I implement it. Ie, just outputting values to the printer port, that's all. So I can even have some tables with digitised sound samples for the allophones, so I would not need to emulate the chip's internals to generate them. This also solves the licensing/patent problems of the algorithm used by the chip, and it wouldn't slow down the main loop ("the heart") of the emulator either. Then what I need are "only" these:
* sane audio infrastructure implemented in Xep128
* having the allophone sample table (probably with some extra data, eg repeat points, or such?)
I really don't see the value of emulating the chip internals... It would make sense if the EP could *modify* the internal program executed by the chip; then yes, it would be needed... But just to play sounds, it simply doesn't make sense to emulate the chip at that low level. Of course, it's still an option to make (or use, eg the sources you mentioned!) a close emulation of the hardware to *generate* the wave table. But only that table is used by Xep128, which is already the raw sound output, and nothing more.
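The table-based approach could be sketched roughly like this (a sketch only: the table contents, function name and 40-byte pause length are hypothetical, not Xep128 code):

```python
# Sketch of the proposal above: instead of emulating the SP0256 internals,
# map each allophone code written to the printer port to a pre-generated
# PCM sample, and just append that sample to the audio output buffer.
# The table entries here are placeholders (real tables would be generated
# offline with a chip simulator).

ALLOPHONE_TABLE = {
    0x00: [128] * 40,           # PA1: short pause (silence around midpoint)
    0x07: [128, 160, 96, 170],  # placeholder waveform for one allophone
}

def printer_port_write(value, out_buffer):
    """Called by the emulator on every printer-port write."""
    sample = ALLOPHONE_TABLE.get(value & 0x3F)  # 6-bit allophone codes
    if sample is not None:
        out_buffer.extend(sample)

buf = []
printer_port_write(0x07, buf)
printer_port_write(0x00, buf)
```

The main-loop cost is then just a dictionary lookup per port write, which is the whole point of the idea.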
-
Yes, only the sound samples of the allophones are needed for a close emulation of Speakeasy, as they are canned, not modifiable. You don't even need the Speakeasy device to obtain them, as they can be recorded by simulating the chip on a PC with the provided code.
I've been playing with Bliss, another Intellivision emulator written in Java, with the five vocal games that were released for the console. The sound is very robotic by today's standards, but on the other side I can clearly understand the words spoken. There isn't noise under the phonemes, and that helps to comprehend the messages better. Curiously there is some form of intonation in the games, an aspect that I've not seen in the SP0256 documentation.
But... if the complete chip can be emulated, then a new, bigger ROM can be created with 256 allophones instead of only 64, with space for "foreign" sounds for our countries' languages. I am only dreaming...
-
Well, yes. Someone could even record his own voice for samples :) Or another source, for better quality. And even the allophones needed for other languages. However, at some point it raises the question of what we actually need. To emulate a piece of hardware? Which sounded as... well, as it sounded... Or an imagined better one, which then has nothing to do with any actually existing hardware. But that is already a bit out of scope, as it's not an "emulator" anymore, at least not an emulator that emulates existing hardware... Which is maybe not a bad thing, however it's kinda specific to this particular emulator only...
-
I have found a set of "distorted" allophones (http://www.cpcwiki.eu/imgs/5/52/Allophones.zip), distorted because they are repeated the wrong number of cycles.
If it were up to me... I would go for a complete emulation, then for a complete alternative ROM, then to implement a form of intonation, then...
But I'm not the emulator writer...
-
That was an 8-bit wav, but this (http://milkcrate.com.au/_other/downloads/sample_sets/little-scale_SP0256-AL2.zip) set is 16-bit with the proper number of cycles.
-
But I'm not the emulator writer...
You can be :) Actually, around the year 2000, I read about the EP-128 without even knowing what it was (I had never seen/used an EP before...), so I have wanted to write an emulator for about 16 years? :) And only in the last few years do I have some "sane" result. So I guess, if somebody is really serious (much more than I was...), it's not so hard to do after all :)
-
gflorez, check this
https://enterpriseforever.com/sound/beszedprogram-fejlesztese/
-
That was an 8-bit wav, but this (http://milkcrate.com.au/_other/downloads/sample_sets/little-scale_SP0256-AL2.zip) set is 16-bit with the proper number of cycles.
wow, interesting, on the internet we can find everything
but the sample quality is bad. Why? When I started to develop a text-to-speech program (17-18 years ago), I began building a sample database like this (digitising my voice) and the quality was much better.
-
This is an "electronic" throat, not real recorded voices. It is seventies technology...
Think of those times: memory was the most expensive part of a computer. What the SP0256 achieved was to have speech stored like text instead of large sample files.
Inside the chip there is a "little" 2Kb ROM with recipes to mould the vocal filter to form the 59 English allophones. There aren't samples of the voices inside.
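The saving is easy to quantify with round numbers (illustrative figures, not measured: a plausible speaking rate of 12 allophones per second is an assumption):

```python
# Rough storage comparison: raw 8-bit samples at 6 kHz versus one byte
# per allophone code. The speaking rate is an illustrative assumption.
sample_rate = 6000          # bytes per second of raw 8-bit mono audio
allophones_per_second = 12  # plausible rate, one byte per allophone

raw_bytes = sample_rate * 1              # one second of digitised speech
coded_bytes = allophones_per_second * 1  # the same second as codes

ratio = raw_bytes // coded_bytes
print(ratio)  # the coded form is hundreds of times smaller
```

Which is exactly why a 2Kb ROM of "recipes" plus one-byte codes was so attractive in the seventies.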
-
This is an "electronic" throat, not real recorded voices. It is seventies technology...
Think of those times: memory was the most expensive part of a computer. What the SP0256 achieved was to have speech stored like text instead of large sample files.
Inside the chip there is a "little" 2Kb ROM with recipes to mould the vocal filter to form the 59 English allophones. There aren't samples of the voices inside.
ah I see
but I think a sampled text-to-speech program is possible on an EP with 128k of memory, with very good voice quality
-
You started to develop a text to speech program.
Was it for the Enterprise?
-
I have joined some allophones....
-
You started to develop a text to speech program.
Was it for the Enterprise?
no, it was for PC
but the code was very simple (it fades the samples into each other); I just didn't have the time/energy to make the samples.
so I only made the ab, ac, ad, ae etc. samples and some others for testing.
so it could say "elmegyek haza" (I'm going home) and other simple words. But the quality was very, very good.
I made a simple editor where I could set the fade points, sample lengths etc.
sadly, I have no source left (it was coded in Delphi)
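The fading approach described above (blending each sample into the next at chosen fade points) can be sketched like this; the function name and fade length are hypothetical, and the original was Delphi code:

```python
def crossfade(a, b, fade_len):
    """Join sample lists a and b, linearly blending the last fade_len
    samples of a into the first fade_len samples of b."""
    head = a[:-fade_len]
    tail = b[fade_len:]
    blend = [
        int(a[len(a) - fade_len + i] * (1 - i / fade_len)
            + b[i] * (i / fade_len))
        for i in range(fade_len)
    ]
    return head + blend + tail

# Two flat "samples" at levels 100 and 200, blended over 4 points.
joined = crossfade([100] * 8, [200] * 8, 4)
```

An editor like the one described would just let the user pick `fade_len` and the join points per sample pair.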
-
Meanwhile....
I have found that Prodatron has adapted a commercial CPC speech program to SymbOS ("SSA-1", Amsoft 1986 (https://www.youtube.com/watch?v=YcgUZG_jz20), or "Speech", Superior Software 1986 (https://www.youtube.com/watch?v=Y9fK5foeDuo); I think they are the same program).
The Speech app only works on the CPC port of SymbOS, as it is hardware-specific (probably it uses the AY chip), but Prodatron has kindly released the disassembly.
(http://www.symbos.de/gfx/shots/tools/symbos-tools-speech1.gif)
-
Now I have discovered that the SSA-1 (http://www.cpcwiki.eu/index.php/Amstrad_SSA-1_Speech_Synthesizer) for the CPC was a peripheral also based on the SP0256-AL2 speech synthesizer, so I think Prodatron adapted another, software-only application made by Amsoft, or sponsored by the Amstrad software distribution company.
Observe that the SP0256-AL2 chip has fixed pitch, while the SymbOS app has a pitch selector like the one in the "Speech" Superior Software program.
-
Then can this program be converted to Speakeasy?
On the Spectrum the Currah uSpeech also uses the SP0256; about 70-80 games support it. Probably these can also be converted to the Speakeasy.
-
With the Z80 code it will probably be easier to get quality software speech on the Enterprise, or to convert the SymbOS app to EP hardware.
Surprisingly, the app is very small. I still haven't had time to study it.
-
The app is in the SymbOS 3.0 beta we have. It loads and seems to work without hanging, but of course it produces no sound.
I've been thinking about the other sound apps on SymbOS. Maybe someone can convert them to use Dave, if the code is at hand.
-
The SymbOS Speech app is relocatable, but has 8Kb of fixed-address code.
Inside that area are the original Amsoft Speech routines.
Every time the program is executed, the Amsoft code is relocated to make it work, by means of a table that holds all the fixed addresses.
Once relocated, there are six entry points: Pitch, Say, Speak, Left, Centre and Right, but I still haven't disassembled them.
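The relocation scheme described above can be sketched as a toy model (the offsets, base address and function name are hypothetical; the real app's fixup table is what Prodatron's loader walks):

```python
# Toy model of table-driven relocation: a fixup table lists the offsets
# where the code holds absolute 16-bit addresses; at load time each one
# is adjusted by the load base. Little-endian, as on the Z80.

def relocate(code, fixup_offsets, base):
    code = bytearray(code)
    for off in fixup_offsets:
        addr = code[off] | (code[off + 1] << 8)  # read stored address
        addr = (addr + base) & 0xFFFF            # shift by load base
        code[off] = addr & 0xFF
        code[off + 1] = addr >> 8
    return bytes(code)

# "jp 119Eh" loaded at base 4000h becomes "jp 519Eh".
patched = relocate(bytes([0xC3, 0x9E, 0x11]), [1], 0x4000)
```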
-
This is the Amsoft program in hex, simply cut from the Speech app source. Also the binary code.
It is placed at 0000h but begins at 0100h, like on the Enterprise, so there are 256 leading zeros.
The entry points used by Prodatron are in the middle, starting consecutively at 119Eh: Pitch=jp 11F1h, Say=jp 1275h, Speak=jp 124Ch, Left=jp 11D6h, Right=jp 11DFh and Centre=jp 11E8h.
The second half of the code is occupied by a dictionary of rules to translate English words (Speak) into allophones (Say).
-
The only two AY calls:
------------------------------------------------
012e 06f4 ld b,0f4h
0130 ed79 out (c),a
0132 06f6 ld b,0f6h
0134 3e80 ld a,80h
0136 ed79 out (c),a
0138 af xor a
0139 ed79 out (c),a
013b c9 ret
------------------------------------------------
013c 06f6 ld b,0f6h
013e 3ec0 ld a,0c0h
0140 ed79 out (c),a
0142 06f4 ld b,0f4h
0144 ed59 out (c),e
0146 06f6 ld b,0f6h
0148 af xor a
0149 ed79 out (c),a
014b c9 ret
------------------------------------------------
And this is one of the routines which call them:
01e3 322001 ld (0120h),a
01e6 221e01 ld (011eh),hl
01e9 1e01 ld e,01h
01eb cd3c01 call 013ch
01ee af xor a
01ef cd2e01 call 012eh
01f2 1e00 ld e,00h
01f4 cd3c01 call 013ch
01f7 af xor a
01f8 cd2e01 call 012eh
01fb 1e07 ld e,07h
01fd cd3c01 call 013ch
0200 3e3e ld a,3eh
0202 cd2e01 call 012eh
0205 3a2d01 ld a,(012dh)
0208 5f ld e,a
0209 cd3c01 call 013ch
020c 3a2001 ld a,(0120h)
020f 57 ld d,a
0210 2a1e01 ld hl,(011eh)
0213 1e3f ld e,3fh
0215 7e ld a,(hl)
0216 e60f and 0fh
0218 cd2e01 call 012eh
021b 3a1c01 ld a,(011ch)
021e 47 ld b,a
021f 10fe djnz 021fh
0221 7e ld a,(hl)
0222 cb2f sra a
0224 cb2f sra a
0226 cb2f sra a
0228 cb2f sra a
022a e60f and 0fh
022c cd2e01 call 012eh
022f 3a1c01 ld a,(011ch)
0232 47 ld b,a
0233 10fe djnz 0233h
0235 00 nop
0236 00 nop
0237 00 nop
0238 23 inc hl
0239 1d dec e
023a c21502 jp nz,0215h
023d 15 dec d
023e c21002 jp nz,0210h
0241 c9 ret
I suppose that is the only part of the code we must modify to port the app to the Enterprise.
Sorry I can't go further, as I don't know a word about sound chips...
-
As I see it, 013C is the AY register select, and 012E writes AY data to the previously selected register.
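For context, on the CPC the AY sits behind the 8255 PPI: port F4xx is PPI port A (the AY data bus), and bits 7-6 of port F6xx (PPI port C) drive the AY's BDIR/BC1 lines (C0h latches a register address, 80h writes data, 00h returns to inactive), which is exactly the sequence in the two routines above. A minimal model:

```python
# Model of the two CPC routines: 013Ch latches the register number,
# 012Eh writes data to the latched register. On the CPC the values
# written to port F6xx mean: C0h = latch register, 80h = write data,
# 00h = inactive, mirroring the OUT sequences in the disassembly.

class AY:
    def __init__(self):
        self.regs = [0] * 16
        self.latched = 0

    def select(self, reg):       # routine at 013Ch
        self.latched = reg & 0x0F

    def write(self, value):      # routine at 012Eh
        self.regs[self.latched] = value & 0xFF

ay = AY()
ay.select(7)     # mixer register
ay.write(0x3E)   # tone/noise enables
ay.select(8)
ay.write(0x0F)   # channel A volume
```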
-
Do you think that the same or similar action can be done with Dave?
-
Unfortunately, the same cannot be done; or it can, but most music would not sound so good.
In these routines we should store the values and play them back in the 50 Hz interrupt with an AY simulation routine; or store the values here and play them back with an AY emulation routine. About 100 additional bytes are needed.
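The first suggestion above (store the values, replay them from the 50 Hz interrupt) can be sketched like this; the function names are hypothetical, and the Dave translation itself is only stubbed:

```python
# Sketch: instead of writing to the AY directly, the patched routines
# log each (register, value) pair; a 50 Hz interrupt handler drains the
# log through an AY-to-Dave translation routine (stubbed here).

log = []

def ay_write_logged(reg, value):
    """What the patched 012E/013C pair would do: record, don't output."""
    log.append((reg, value))

def interrupt_50hz():
    """Runs once per frame: drain and return the logged writes.
    A real routine would convert each pair into Dave register writes."""
    drained = list(log)
    log.clear()
    return drained

ay_write_logged(7, 0x3E)
ay_write_logged(8, 0x0F)
frame = interrupt_50hz()
```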
-
Sorry, I mixed things up; I thought this code was from SymAmp. This code can be substituted for Dave quite easily; it plays digi sound only.
-
You can try to substitute the player code at 01E3h with the content of the zip file; it should work, and I hope the timing is also good.
The code at 012E and 013C can be dropped, or left as it is.
-
oops, please substitute one RRCA with one NOP, because at the first register write the maximum volume value is 3Ch, while at the second volume write it is only 1Eh if the RRCA remains :oops:
-
Thanks!
I only put that routine as an example. There are others that call 012E and 013C...
I don't have the disassembly here at home, so more tomorrow.
-
Only this other routine uses the AY calls:
0176 322001 ld (0120h),a
0179 221e01 ld (011eh),hl
017c 1e01 ld e,01h
017e cd3c01 call 013ch
0181 af xor a
0182 cd2e01 call 012eh
0185 1e00 ld e,00h
0187 cd3c01 call 013ch
018a af xor a
018b cd2e01 call 012eh
018e 1e07 ld e,07h
0190 cd3c01 call 013ch
0193 3e3e ld a,3eh
0195 cd2e01 call 012eh
0198 3a2d01 ld a,(012dh)
019b 5f ld e,a
019c cd3c01 call 013ch
019f 3a2001 ld a,(0120h)
01a2 87 add a,a
01a3 87 add a,a
01a4 87 add a,a
01a5 57 ld d,a
01a6 cd0103 call 0301h
01a9 e63f and 3fh
01ab 4f ld c,a
01ac 2a1e01 ld hl,(011eh)
01af af xor a
01b0 0600 ld b,00h
01b2 ed4a adc hl,bc
01b4 1e08 ld e,08h
01b6 7e ld a,(hl)
01b7 00 nop
01b8 00 nop
01b9 00 nop
01ba e60f and 0fh
01bc cd2e01 call 012eh
01bf 3a1c01 ld a,(011ch)
01c2 47 ld b,a
01c3 10fe djnz 01c3h
01c5 7e ld a,(hl)
01c6 cb2f sra a
01c8 cb2f sra a
01ca cb2f sra a
01cc cb2f sra a
01ce e60f and 0fh
01d0 cd2e01 call 012eh
01d3 3a1c01 ld a,(011ch)
01d6 47 ld b,a
01d7 10fe djnz 01d7h
01d9 23 inc hl
01da 1d dec e
01db c2b601 jp nz,01b6h
01de 15 dec d
01df c2a601 jp nz,01a6h
01e2 c9 ret
Just by injecting the two modified routines we can have Speech on the Enterprise SymbOS port...
The difficult task will be discovering how to inject them into the app.
The app lets the user change the pitch of the voice; I assume this is done in software before calling the AY routines.
But the user can also put the voice on any of the three channels (L, C and R). Do you think this will work with your modifications?
-
Channel selection is not working with my modification; I use the D/A. The question is what is more important: the possibility to choose the speaker direction (L, C, R), or the possibility to choose the volume port. I ask because for the first we can use the internal D/A, in which case the voice will be loud; for the second we cannot use the internal D/A and the voice will be quiet.
-
The package contains modified stuff for both routines, and sound output side selection.
-
Those are the requirements of the SymbOS app: pitch slider, Say (allophones), Speak (English text) and left, right or centre selection. You can see it on the SymbOS screen I posted.
There isn't a volume selection; we can assume medium here.
Later, if all goes right, a good Enterprise text-to-speech can be done with the Amsoft code, but exploiting Dave's characteristics better.
-
Thanks again geco.
Here you have the English-to-allophones dictionary, only 299 rules. I remember the Amiga rules being a lot more...
Edit: the Amiga American English rules are exactly 701, and a lot more complex, including pronunciation exceptions.
-
There are some odd words in the list, like SUPERIOR or DAVID. I think this program is the Superior Software one, resold by Amsoft.
Superior Software made similar text-to-speech programs for other microcomputers, for example the BBC, one of the odd words included...
https://en.m.wikipedia.org/wiki/Superior_Software
-
The two Dave routines, modified by geco, can fit in the 256 bytes of unused memory at the beginning of the Amsoft code.
Then the EP SymbOS app will be only slightly longer than the CPC original. Once the Amsoft code is fixed up, the calls to the Dave routines must be injected.
-
The modified code also fits into its original place :)
-
I know, but Prodatron kept the original code untouched, probably so as not to infringe Amsoft's copyright. He only relocates the absolute jumps, calls and data addresses at execution time.
On the other side, I can paste your routines over the old ones, also at execution time.
It will be more laborious, but I will do it with the same care as Prodatron.
-
I have finished the source code, but sjasm doesn't accept some directives of the assembler used by Prodatron.
On the other side, once compiled, the executable will have the .bin extension like the first SymbOS port we had; I don't know if it works the same as the .exe extension, common in the 3.0 beta we use now.
Edit: the two headers look similar.
-
I guess the following commands are not accepted:
db #dd:ld h,a which is LD IXH,A
-
db #dd:ld h,a
SjASMPlus can handle multiple instructions separated by a colon.
But it is also possible to simply convert it into two lines:
db #dd
ld h,a
-
There are other address calculations that are not accepted.
I don't know too much about assemblers; can you do the build?
-
Please compare aaa (the compiled binary) with the original binary. There is still one error, but its cause is a duplicate label, so I think something should be changed; without the original binary I don't know what.
-
It seems the source is old, but the binary is probably in the first Enterprise SymbOS build.
I will look for it.
-
This is the oldest appspeech I've found, from 2005.
I have also corrected the duplicate label by observing the source of other apps.
Try it; if it doesn't work, we can always ask the creator...
-
I have little patience... I have crudely hex-edited the executable to inject geco's Dave routines by hand.
See the English SymbOS thread....
-
At last I've disassembled all the Amsoft Speech code.
The original code weighs about 8Kb, but removing the dictionary, leaving exclusively the allophone parsing, it can probably be condensed into 4Kb or less.
The only Enterprise game that uses the Speakeasy device is Eat-it-up, not a very memory-demanding game, as it only weighs 12.5Kb.
The two speech systems work in a similar way: Speakeasy receives strings of allophones (internally it works with 59 codes for the allophones plus five more codes for pauses) through the parallel port, and the Amsoft code can also be fed with strings of allophones. The only problem is that the Speech code needs Z80 time to make sound.
However, the nomenclature of the two allophone systems is different. Still, I've found it easy to construct intelligible words with the Amsoft code, probably better than with Speakeasy, which doesn't admit stresses inside words (tildes) and has the pitch fixed in ROM, unlike the Speech code.
This was thinking of a way of emulating Speakeasy, but this tiny code can also easily be incorporated into newer games or programs.
Improvements to the sound could be achieved by modifying the samples, but I'm not the man who can do that...
-
I can see some differences between the two allophone lists, Speakeasy and Amsoft Speech.
Almost all of them match, even the representation of the sound; of course there are sounds that are not represented in both lists. Speech has more sounds, but Speakeasy has more variations of the same sound.
Speakeasy chip SP0256: http://courses.cit.cornell.edu/ee476/Speech/SPO256-AL2.pdf
Amsoft Speech list:
"%" Pause 2
"1" to "9": stress or tilde on vowels and "/H". "9" is the lower note, "1" the higher note, contrary to what it may seem.
You can put a number just after every vowel. I think Speech can almost sing.
"." End of line
"?" Interrogation
"AI" "ER"
"A0" "EE"
"AH" "EH"
"AY"
"AW" "OO"
"AE" "OW"
"AA" "OY"
"OH"
"UU"
"UH" "/H"
"UW"
"UX"
------------------------------------
"D" "CT"
"DH" "CH"
"DR"
"DU" "Z"
"ZH"
"T"
"TH" "S"
"TR" "SH"
"N" "B"
"NX"
"R"
"L"
"M"
"V"
"K"
"P"
"W"
"J"
"Y"
"F"
"G"
Whoever has looked at the disassembled code may have realised that the sounds are formed by playing sample chunks of 63 bytes (though almost all of them measure 64) a number of cycles from 1 to 9.
There are complex sounds composed of two or three different samples.
But the strangest thing is that there are two chunks twice the normal size. The extra sample code is not used by the playing routines, so I think they are discarded sounds, not used in the commercial release.
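The playback scheme just described (each sound is a list of chunk references, each repeated a number of cycles) can be sketched as follows; the chunk contents are placeholders:

```python
# Sketch of the scheme above: a "recipe" is a list of
# (chunk_index, repeat_count) pairs; rendering concatenates each
# 63-byte chunk repeat_count times. Real chunks hold waveform data;
# here they are filled with their own index for demonstration.

CHUNK_LEN = 63
chunks = [bytes([i] * CHUNK_LEN) for i in range(4)]  # placeholder chunks

def render(recipe):
    out = bytearray()
    for chunk_index, repeats in recipe:
        out += chunks[chunk_index] * repeats
    return bytes(out)

rendered = render([(0, 2), (3, 1)])  # chunk 0 twice, then chunk 3 once
```

A complex sound made of two or three samples is then just a longer recipe list.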
-
By injecting the Speech sample numbers into a file with a WAV header (6 kHz, 8-bit) I have been able to hear some of the voices with Wavelab, looping the chunks.
I've put at least 16 bytes of 7Fh between chunks to mark their start and end.
Edit: 127 (7Fh) is the intermediate point, not zero, which is the lower limit of the wave.
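The same header trick can be done without a hex editor using Python's standard wave module (8-bit unsigned mono at 6 kHz, with 7Fh as silence, as above; the filename and the dummy ramp payload are arbitrary):

```python
import wave

# 16 bytes of 7Fh silence, a dummy 128-byte ramp standing in for a
# chunk's sample data, then 16 more bytes of silence as a marker.
samples = bytes([0x7F] * 16) + bytes(range(0x40, 0xC0)) + bytes([0x7F] * 16)

with wave.open("chunk.wav", "wb") as w:
    w.setnchannels(1)      # mono
    w.setsampwidth(1)      # 8-bit unsigned PCM
    w.setframerate(6000)   # 6 kHz, as in the Wavelab experiment
    w.writeframes(samples)
```

Any audio editor can then open and loop `chunk.wav` exactly as described.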
-
I was totally wrong (but not that wrong...).
The samples in Amsoft's Speech are packed as 4-bit nibbles. The playing routine first takes the lower nibble and shifts it left two bits, and then the high nibble is shifted right two bits. So the range of the played samples is from 0 to 60 in multiples of 4. There are 126 samples in every chunk, not 63.
To hear them on a PC I have unpacked the nibbles, shifted them correctly and added 95 to get 127 as the midpoint.
Curiously, the resulting wave and sound are very similar.
The upper wave is the wrong one.
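The corrected unpacking can be expressed like this (a sketch of the scheme, not the original Z80 code; the +95 offset is the one mentioned above, which puts the middle nibble value 8 at 127):

```python
def unpack_chunk(packed):
    """Unpack 4-bit samples: low nibble first, each nibble shifted
    left two bits (range 0-60), then offset by 95 so the midpoint
    lands at 127, as described above."""
    out = []
    for byte in packed:
        out.append(((byte & 0x0F) << 2) + 95)  # low nibble first
        out.append(((byte >> 4) << 2) + 95)    # then high nibble
    return out

samples = unpack_chunk(bytes([0x00, 0xF8]))
```

Note that a 63-byte packed chunk unpacks to exactly the 126 samples mentioned above.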
-
Unfortunately, the same cannot be done; or it can, but most music would not sound so good.
In these routines we should store the values and play them back in the 50 Hz interrupt with an AY simulation routine; or store the values here and play them back with an AY emulation routine. About 100 additional bytes are needed.
I've disassembled (with unknown memory addressing) the APPSYAMP.EXE SymbOS application, looking for the CPC AY output routines.
I think partial EP sound could be achieved if Prodatron releases the player's source.
1a3a 11f482 ld de,82f4h
1a3d 43 ld b,e
1a3e ed49 out (c),c
1a40 01c0f6 ld bc,0f6c0h
1a43 ed49 out (c),c
1a45 0e00 ld c,00h
1a47 ed49 out (c),c
1a49 04 inc b
1a4a 3e92 ld a,92h
1a4c ed79 out (c),a
1a4e cbf1 set 6,c
1a50 06f6 ld b,0f6h
1a52 ed49 out (c),c
1a54 43 ld b,e
1a55 ed78 in a,(c)
1a57 0100f7 ld bc,0f700h
1a5a ed51 out (c),d
1a5c 05 dec b
1a5d ed49 out (c),c
1a5f c9 ret