# Sleep and Dreams > Research > Mental imagery: its limits and mechanisms

## ATA

PAGE IS UNDER CONSTRUCTION. I am moving my notes here now; it will take some time.
* I will sort and translate them later if I have enough time

My research focuses mainly on imagery of objects and environments and the mechanisms behind it.


Some info about other types :
-----------------------------
*Entoptic phenomenon*
Entoptic phenomenon - Wikipedia, the free encyclopedia

*PURKINJE’S VISION* 
http://monoskop.org/images/6/6f/Wade...uroscience.pdf

----------------------------
*Closed-eye hallucination*
Closed-eye hallucination - Wikipedia, the free encyclopedia

----------------------------
*Geometric imagery* 



*Uncoiling the spiral: Maths and hallucinations*
Uncoiling the spiral: Maths and hallucinations | plus.maths.org

*A Model for the Origin and Properties of Flicker-Induced Geometric Phosphenes* 
PLOS Computational Biology: A Model for the Origin and Properties of Flicker-Induced Geometric Phosphenes

*What geometric visual hallucinations tell us about the visual cortex*
http://www.math.uh.edu/~dynamics/reprints/papers/nc.pdf

*Flicker-light induced visual percepts: Frequency dependence and specificity of whole percepts and percept features*
http://www.carsten-allefeld.de/pub/fip.pdf

*Electrophysiological correlates of flicker-induced form hallucinations*
http://www.ispsychophysics.org/fd/in...ewFile/566/557

*Generating Vivid Geometric Hallucinations using Flicker Phosphenes with the “Neurolyzer Table”* 
Generating Vivid Geometric Hallucinations using Flicker Phosphenes with the “Neurolyzer Table” - SaikoLED

----------


## ATA

Fovea angle/size of object


*SIZE* 

Vertical
 
Horizontal

Pink: line of sight
Green: physical limitation of normal vision by tissue around the eyes and the nose
Blue: non-restricted visual field (forms a circle)
Red: limit of the visual surface given maximum eye movement (yellow: binocular)


[spoiler]
max eye positions :
up

down


[/spoiler]
Eye visual field size by retinotopic mapping:
[spoiler]



[/spoiler]

*Mental imagery visual angle* 

A) in the fixed-eye condition
The main factor limiting the visual angle of an image is the requirement to display the entire image in sufficient detail and color (what counts as sufficient detail depends on the task or image type).
Color vision and perception of detail have their own visual-angle limits.
In most cases the visual angle is about 20 degrees, roughly a 30 cm tall object at arm's-length distance (it can be up to about 50 degrees).





* The image is centered on the fovea's cortical representation.
* The visual angle is limited by the cortical representation and its acuity, which reflects the eye's acuity.
* This angle applies to the first appearance of the image when visualizing an object without a background. When visualizing a scene, imagining an object and focusing on a part of it, or mentally zooming, this limit does not apply. In those cases the visual angle can be larger, but acuity at angles above roughly 50 degrees is very poor.

Visual angle calculator: Sensation & Perception: Visual Angle

study:
Measuring the visual angle of the mind's eye: http://wjh-www.harvard.edu/~kwn/Koss...leMindsEye.pdf
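As a quick sanity check on the ~20 degree figure above, the standard visual-angle formula can be evaluated directly. This is only my illustration; the ~70 cm arm's-length distance is an assumption.

```python
import math

def visual_angle_deg(size_m, distance_m):
    """Visual angle subtended by an object of a given size at a given distance."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# ~30 cm tall object at an assumed arm's length of ~70 cm
angle = visual_angle_deg(0.30, 0.70)
print(round(angle, 1))  # 24.2 degrees, in the same ballpark as the ~20 degree estimate
```

The result lands close to the estimate above; the exact number depends on how long you assume "arm's length" to be.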

----------


## ATA

The information above explains why images at the beginning of hypnagogia fill only a part of the visual field: the eyes do not move. The next part will be about eye movements and how they affect images.

*Eye Movements*

a) Eye movements are very important for creating an image and for recall from memory.

A fixed gaze limits the usable visual angle and blocks image scanning, which causes problems with recall from memory and image construction.

[spoiler]
source: http://cosy.informatik.uni-bremen.de...e_july_13.html
Eye Movements
Several studies report the occurrence of spontaneous eye movements during mental imagery. These studies usually present participants with a stimulus which they later mentally imagine to describe or answer questions about. It is generally found that eye movements during such imagery tasks reflect the content of the mental image (e.g., Brandt & Stark, 1997; Spivey & Geng, 2001; Laeng & Teodorescu, 2002; Demarais & Cohen, 1998; Johansson, Holsanova, & Holmqvist, 2006; Johansson, Holsanova, Dewhurst, & Holmqvist, 2011).

In the experiments reported by Johansson et al. (2006) and Johansson, Holsanova, Dewhurst, and Holmqvist (2011), a distinction between local and global correspondence of eye movements to the processed content of the mental image is defined. Global correspondence requires that an eye movement is not only directed towards the expected direction, e.g., to the left when processing the spatial relation left of, but also to a location consistent with the participant’s gaze pattern over the whole experiment, i.e., the gaze is directed to the same location every time the same entity is referred to. Local correspondence requires the eye movement to only match the expected direction. Johansson et al. (2006) and Johansson, Holsanova, Dewhurst, and Holmqvist (2011) report experiments in which participants were either shown a complex and detailed picture or were presented with the verbal description of a complex and detailed scene. After this perception phase an imagery phase followed in which participants had to describe the picture/scene from memory while their eye movements were tracked. During this phase participants were facing a blank white screen. It was varied whether participants are allowed to freely move their eyes during the perception phase and during the imagery phase. For participants allowed to freely move their eyes during both phases, there is a significant local and global correspondence of their eye movements to the mental image. These results were reproduced in total darkness. The correspondence remained significant even when participants were forced to keep a fixed gaze during the perception phase. When participants had to keep a fixed gaze during the imagery phase after freely moving their eyes during the perception phase, it was found that recall is inhibited. Participants reported significantly less detail, objects, and locations compared to a control group. 
Furthermore, an analysis of the given verbal description showed that participants reported more abstract properties of the stimulus, e.g., global gestalt properties, whereas a control group reported more concrete details. These results provide evidence that eye movements during mental imagery are 1) functional for the recall of information from a mental image; 2) occur independently of the input modality of the stimulus; and 3) are not exact re-enactments of the eye movements of the visual perception of the stimulus.

Furthermore, it has been found that the spatial dispersion of eye movements during mental imagery depends on individual differences (Johansson, Holsanova, & Holmqvist, 2011). The spatial mental imagery score of the “Object-Spatial Imagery and Verbal Questionnaire” (Blazhenkova & Kozhevnikov, 2009) was found to be negatively correlated to the spatial dispersion of the gaze pattern produced during mental imagination of a complex scene. This spatial mental imagery score reflects a person’s preference and ability to use spatial mental imagery compared to, for example, visual mental imagery or language-like thought. Concretely, the spatial distribution of the eye movements, that is, the area participants looked at during imagery shrinks with higher scores in the ability to use spatial mental imagery.

Summarizing, the literature reports 1) the robust occurrence of spontaneous eye movements during mental imagery; 2) that these eye movements reflect the content of the mental image; 3) that forcing a fixed gaze affects mental imagery performance; and 4) that individual differences affect eye movements, in particular, their spatial dispersion

---
6.3 Eye Movements

As reviewed in Chapter 2, the literature reports the robust occurrence of spontaneous eye movements during mental imagery. These eye movements reflect the content of the mental image and have been shown to be functional in mental imagery. In particular, the recall of memories using mental imagery is negatively affected qualitatively and quantitatively when participants have to maintain a fixed gaze. Furthermore, individual differences in the spatial dispersion of such spontaneous eye movements have been reported.

6.3.1 Eye Movements in PIT

The computational model of PIT directly incorporates spontaneous eye movements during mental imagery because saccades are part of the perceptual actions used in visual perception. During mental imagery these same perceptual actions are employed to instantiate the conceptual description a mental image is based on. The model implements a distinction between overt and covert attention shifts. Overt attention shifts are assumed to correspond to, in particular, spontaneous eye movements, whereas covert attention shifts correspond to non-observable attention shifts such as within the periphery of one’s gaze. The distinction between overt and covert attention shifts is made based on the length of the vector that represents the attention shift. That is, vectors with a length larger than the a-priori set threshold, will be executed as overt attention shifts, i.e., eye movements. This means that if attention is shifted beyond a certain distance from the current focus of attention, the attention shift will be observable as a spontaneous eye movement.

6.3.2 Functionality of Eye Movements

In PIT, attention shifts are functional for mental imagery; they reflect the currently processed content and their suppression will restrict instantiation and thereby the generation and inspection of the mental image. These properties follow straight-forwardly from the fact that mental imagery is realized by the instantiation of mental concepts and that the process of instantiation is realized by employing perceptual actions such as (overt) attention shifts. If the spatial relation left-of is instantiated during mental imagery this could be realized by a respective eye movement which would then also directly reflect the currently imagined spatial relation. If eye movements are suppressed then consequently instantiation is inhibited. This means that the processing of the mental image is inhibited in so far as overt attention shifts cannot be executed. This will restrict the generation and inspection of the mental image. It has been shown that keeping a fixed gaze during mental imagery produces such inhibitions in recalling content of the mental image independent of how the to-be-imagined stimulus has been presented previously, i.e., verbally or visually. This finding is in line with PIT’s assumption that the mental concepts underlying mental images are the result of the integration of all modalities. That is, the instantiation is not directly related to the mode of perception of the to-be-instantiated mental concept.

Another aspect of the inhibition of mental imagery due to keeping a fixed gaze is the fact that not only the amount of information, e.g., the number of recalled entities of the stimulus, is decreased, but, additionally, also the quality of what is recalled, i.e., the type of information, changes when eye movements are inhibited. Johansson, Holsanova, Dewhurst, and Holmqvist (2011) reported that participants would rather recall global and more abstract information about the stimulus such as “it was a living room” or “the walls were colored in blue” when gaze was kept fixed during imagery. In contrast, the descriptions given in the condition in which eyes could move freely rather referred to referents, states and events of the stimulus, e.g., “the man was digging”. It is pointed out that the former more global information would also be expected to be perceived during visual perception with a fixed gaze, because it refers to the type of information that can be gathered through a single fixation and the surrounding peripheral information. This exact analogy between (fixed gaze) vision and (fixed gaze) imagery is also found in the computational model. An eye movement (i.e., an overt attention shift) is employed exactly when attention is to be shifted beyond what would be accessible by covert attention shifts. This means, the model could also only instantiate that information that requires no such overt attention shifts in a simulation of a fixed gaze mental imagery task. That information would then naturally be of the kind reported for fixed gaze vision, i.e., rather global and abstract information.

6.3.3 Individual Differences in Eye Movements

The dispersion of spontaneous eye movements during mental imagery is subject to individual differences and has been linked to the participants’ score in the “Object Spatial Imagery and Verbal Questionnaire” (OSIVQ) of Blazhenkova and Kozhevnikov (2009). This questionnaire assesses individual differences in cognitive style with respect to one’s ability and preference to use object imagery (i.e., visual mental imagery) and spatial mental imagery. The two scores for object and spatial mental imagery are negatively correlated to each other which indicates a trade-off between the two types of mental imagery (Kozhevnikov, Blazhenkova, & Becker, 2010). Johansson, Holsanova, and Holmqvist (2010) report a negative correlation between the spatial dispersion of eye movements during mental imagery and the spatial mental imagery score of the OSIVQ. That is, the stronger the preference/ability of a person to use spatial mental imagery, the lower the dispersion of spontaneous eye movements will be. There are two ways to account for this finding that can be derived from the model of PIT. The first possibility is that people with a preference to use spatial mental imagery have the skill of using spatial mental imagery very efficiently. Such efficiency could be understood as being able to instantiate spatial mental concepts, such as spatial relations, with particularly short attention shifts. That is, the concept left-of would be instantiated by a shorter vector by a participant with a high spatial mental imagery score than for a participant with a lower spatial mental imagery score. The shorter the vectors used in the instantiation process, the faster one can imagine spatial configurations, because reaction times depend on the length of the attention shift. This aids one’s ability (and thereby likely also one’s preference) to use spatial mental imagery. Shorter attention shifts naturally lead to a lower dispersion of the overall pattern of (overt) attention shifts.

The second possibility offered by the model is that people with a high spatial mental imagery score will mentally imagine much less visual information, e.g., shapes, textures, than a person with a low spatial imagery score. The reason is that the spatial mental imagery score is negatively correlated with the object (i.e., visual) imagery score which indicates the preference/ability to imagine visual information. When less shape information is instantiated, the instantiation of the spatial relations will in consequence utilize shorter attention shifts. The reason is that the instantiation of, for example, left-of is context-sensitive so that available perceptual information of the shape of a referenced entity will affect the length of the vector of left-of proportional to the extent of the entity’s (imagined) shape. Section 3.2.2 and Section 5.1.3 elaborate on the mechanisms of this context-sensitivity. Concretely, when the shape of an entity is not instantiated, its shape is abstracted to a point with no extent. For such a shape-less entity the length of the spatial relations is not affected so that the default short length is used. This property of the model can be observed in Figure 6.4. Table 6.3 shows a comparison of the employed overt and covert attention shifts for the two conditions.
------------------------------------
Numerous recent experimental studies have shown that, when people hold a visual image in mind, they spontaneously and unconsciously make saccadic eye movements that (at least partially) enact the stimulus-specific pattern of such movements that they would make if actually looking at the equivalent visual stimulus (Brandt & Stark, 1997; Demarais & Cohen, 1998; Spivey et al., 2000; Spivey & Geng, 2001; Gbadamosi & Zangemeister, 2001; Laeng & Teodorescu, 2002; de’Sperati, 2003; Johansson et al., 2005, 2006, 2010, 2012; Humphrey & Underwood, 2008; Holšánová, 2010; Holšánová et al., 2010; Sima et al., 2010; Bourlon et al., 2011; Fourtassi et al., 2011, 2013; Johansson, 2013; Johansson & Johansson, 2014; Laeng et al., 2014; see also Clark, 1916; Jacobson, 1932; Totten, 1935; Altmann, 2004; Martarelli & Mast, 2011).

Furthermore, imagery is disrupted (to a greater or lesser degree) if someone who is holding an image in their mind either restrains themselves (to the limited degree that this is possible) from making eye movements, or else deliberately moves their eyes in an image-irrelevant way, thus disrupting the spontaneous saccadic pattern (Antrobus et al., 1964; Singer & Antrobus, 1965; Sharpley et al., 1996; Andrade et al., 1997; Ruggieri, 1999; van den Hout et al., 2001, 2011; Kavanagh et al., 2001; Laeng & Teodorescu, 2002; Barrowcliff et al., 2004; Postle et al., 2006; Kemps & Tiggemann, 2007; Maxfield et al., 2008; Lee & Drummond, 2008; Gunter & Bodner, 2008; Lilley et al., 2009; Jonikaitis et al., 2009; Engelhard et al., 2010, 2011; Laeng et al., 2014). This issue has been much researched lately, not so much because of its significance for our understanding of imagery, but because of its possible relevance to the understanding of the psychotherapeutic technique known as EMDR (Eye Movement Desensitization and Reprocessing), which is widely used in the treatment of Post-Traumatic Stress Disorder (PTSD), and which may perhaps owe its effectiveness largely to the fact that deliberate eye movements tend to disrupt any concurrent imagery. In EMDR treatment, patients are induced to deliberately move their eyes back and forth whilst visually recalling the events that have traumatized them; it is claimed that this leads to a significant reduction in the vividness of their memories of those events, and of the distress, and consequent symptoms, that those memories cause (Shapiro, 1989a, 1989b, 2001; Shapiro & Forrest, 1997; Mollon, 2005). Studies of therapeutic outcomes seem to bear out claims for EMDR’s effectiveness (Carlson et al., 1998; Van Etten & Taylor, 1998; Shepherd et al., 2000; Power et al., 2002; Ironson et al., 2002; Bradley et al., 2005; APA, 2006; Bisson et al., 2007; Högberg et al., 2007, 2008; van der Kolk et al., 2007; Rodenburg et al., 2009; Kemp et al., 2010).

Although the mechanisms and real therapeutic effectiveness of EMDR remain controversial (for negative opinions, see: Lohr et al., 1998, 1999; McNally, 1999; Herbert et al., 2000; Davidson & Parker, 2001; Taylor et al., 2003; Justman, 2011; – for defenses and more positive assessments see: Perkins & Rouanzoin, 2002; Schubert & Lee, 2009; Gunter & Bodner, 2009; Cukor et al., 2010), the disruptive effect of deliberate eye movement upon visual imagery appears to be well established, and it implies that the eye movements that spontaneously occur when people visualize things (or, at the least, the brain processes that initiate and control these movements) are not mere accompaniments or epiphenomena of the imagery, but are (as enactive theory would lead one to expect) a true, functionally significant part of the physiological process that embodies it.[46] (However, Mast & Kosslyn (2002b) argue that the eye-movement evidence can also be accommodated to quasi-pictorial theory.[47])[/spoiler]
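The overt/covert split described in the quoted PIT model (an attention shift becomes an overt eye movement only when its vector exceeds an a-priori length threshold) could be sketched roughly like this. The threshold value and units here are my own placeholders, not taken from the model.

```python
import math

# Placeholder threshold on attention-shift vector length (arbitrary units).
OVERT_THRESHOLD = 2.0

def classify_shift(dx, dy):
    """Return 'overt' (executed as an eye movement) or 'covert'
    (a non-observable shift within the periphery of the gaze)."""
    length = math.hypot(dx, dy)
    return "overt" if length > OVERT_THRESHOLD else "covert"

print(classify_shift(0.5, 0.5))  # covert - short shift stays in the periphery
print(classify_shift(3.0, 1.0))  # overt - long shift would show up as a saccade
```

Under this rule, suppressing eye movements caps the reachable distance at the covert range, which matches the quoted claim that fixed-gaze imagery is limited to "single fixation plus periphery" information.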

----------


## ATA

TEXT WILL BE EDITED (15% complete)

b) Eye movements can disrupt imagery by causing a shift of attention to physical sight.


Based on subjective measurement and sensory/multisensory integration theory.

Signal priority - P
Perception priority of a signal: 0-5, where 3 is the border of conscious perception.
A higher number means lower priority; priority 1 is "focus".

Use of system resources - R
1R = 1/100 of the resources available in the normal state of wakefulness

Data streams (vision):
Vision data stream 1 - physical vision (FV)
Vision data stream 2 - from memory
Vision data stream 3 - data
Vision data stream 4 - visualization
Vision data stream 5 - "hypnagogic"
Vision data stream 6 - "dream"


Normal perception with eyes closed includes both physical and non-physical data streams.
It is not easy to separate them because they overlap; black on black is the worst-case scenario.

FV - physical vision
NV - non-physical vision

Streams compete for resources based on their priority.


The NV picture stabilizes if the priority of NV is at least higher than P(FV), ideally if P(FV) is above 3.
The NV picture destabilizes if the priority of FV is higher than P(NV).
* P values are taken after a focus on vision or an eye movement
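A minimal sketch of the stabilization rules above, assuming the 0-5 priority scale where a lower number means higher priority and 3 is the conscious-perception border. The function names are mine; this is only an illustration of the notation, not a validated model.

```python
# Priority scale: 0-5, lower value = higher priority; 3 = conscious border.
CONSCIOUS_BORDER = 3.0

def consciously_perceived(p):
    """A signal is consciously perceived when its P is below the border."""
    return p < CONSCIOUS_BORDER

def nv_stable(p_fv, p_nv):
    """NV picture stabilizes when NV outranks FV (lower P wins)."""
    return p_nv < p_fv

def nv_ideal(p_fv, p_nv):
    """Ideal case: NV outranks FV and FV has dropped below conscious perception."""
    return nv_stable(p_fv, p_nv) and p_fv > CONSCIOUS_BORDER

# Example: a hypnagogic image holding while physical vision fades out
print(nv_stable(p_fv=3.5, p_nv=1.5))  # True - NV outranks FV, picture holds
print(nv_ideal(p_fv=3.5, p_nv=1.5))   # True - FV is also above the border
print(nv_stable(p_fv=2.0, p_nv=4.0))  # False - FV outranks NV, picture destabilizes
```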



* 3 is the limit of conscious perception: when a signal breaks this barrier downward toward higher priority, it is consciously perceived; when it moves above 3, it is no longer perceived.
This change is noticeable during relaxation but hard to describe; it is like a change from 2D to 3D blackness, a different perception of space. The change has different levels; I suspect it depends on how many senses are turned off (vision, proprioception, ...).


*Passive relaxation - red behind the eyelids*
*Interaction of thoughts, NV and FV priorities


*Passive relaxation - change of position, hypnagogic state, sleep signals, dream*

    about 30 min of passive relaxation before recording the "image"; I tried to sleep and changed position at least 3 times (left, right, left)
    I noticed some hypnagogic imagery before recording
    recording starts a few seconds before changing position to lying on the left side (red dots - eyes open)
    brown timeline: changing position, forehead against the wall of the bed - notice the blue line, physical touch, in priority
    green line - physical vision
    light blue timeline - stable hypnagogic state, constant flow of images and short scenes; hypnagogic phase about 0.5-1.5 (depending on NV priority); orange line in P
    yellow timeline + blue line - sleep signals, touch-based dream, something else? This needs more data
    black timeline in R - probably loss of consciousness, or a very weak one
    indigo timeline - dream; notice also the increase in R
    unfortunately I recorded only NV, FV, FT and R, so a large part of the interesting data is lost; the experience continued for about the next 20 min: dream, hypnagogic stability control, 2x false awakening. Without data from the other senses, FV and NV are almost meaningless



---------
EOG and automatic drawing test, 10 min
    automatic drawing of FV and NV
    horizontal EOG

Goal: compare EOG (delta amplitude) to increases in FV priority

Result:
    all FV priority peaks were found in the EOG (+-5 s; the drawing is not that precise; recording about 9 min)
    a few peaks in the EOG but not in the drawing → only in parts where the priority is already high and an eye movement or focus on FV no longer increases it, OK

Idea: audio EOG biofeedback to minimize eye movements

-------
*Passive relax + suggestion not to move the eyes and to breathe into the eyes, about 1 min*


    perception priority P (0-5, lower has higher priority)
    NV non-physical vision
    FV physical vision
    P-FV up → P-NV down
    P-NV up → P-FV down
    P-NV up peaks (value decreases) = "hypnagogic images"
    P-NV large down peaks or P-FV large up peaks = eye movement or focus on FV
    orange arrow areas = behind-the-eyelids blackness, in this case redness (focus on FV)
    orange points, P-FV up and P-NV up = influence of another sense, thoughts, integration... (needs more research)

*Passive relax - morning*

    red dot = sneeze
    pink dots = large eye movements (possibly also a focus on FV without movement, but less probable)
    RED - FV, GREEN - NV

*Active relax - focus on breath and heartbeat*


    evening 20:30
    hard to focus
    chaotic thoughts
    cognitive overload (Tetris effect): about 6 h of watching a new TV series; this increases the baseline P-NV
    P-NV higher than 3 after 15 s (probably due to the cognitive overload)
    T 2.75 P-NV and P-FV decrease, probably due to a focus on other senses or a thought
    T pink line = eyes open automatically; a few times during the session my eyes slightly opened by themselves
    T green line = dream scene; no longer flashes of an image but a scene (may be caused by visualization, automatic association on the theme of a thought, or arise by itself)
    visualized ones have higher P than associations, associations higher than automatic ones (at similar times within one session)

    P-FV below 3 = a very important moment: physical vision is turned off, which allows NV disinhibition
*Active relax - focus on breath and heartbeat - test 2*


    same day, evening 22:00
    more awake than in the previous test but fewer images
    P-NV higher than 3 after about 18 s (probably due to the cognitive overload)
    T 0:18-0:45 association-created images; I remembered a few names from the series
    T 1:08 P-FV and P-NV go down, probably because I started focusing on breath and heart rate (probably an increase of proprioception P)
    T 4:45 P-FV under 3
    T 6:22-9:00 probably a proprioception effect
    green line above T 12:30: active visualization of a flight on a dragon (a scene from the series); the 2 scenes before were caused by a mix of visualization and association (more association)


*Some tests (second part of a larger test, unfinished)*
  before drawing: 12 min of relaxation with eyes open

 T around 1:45 green line = visualization → images
 T around 2:00 yellow line = after visualization, automatic association and a scene with lower P

*Relax - breath - morning*
   9:30


    shorter inter-flash interval than in the other tests (morning?)
    P-NV stably above 3 at 1:40
    P-FV below 3 at 5:00
    T red line = behind-the-eyelids blackness, here redness; it caused problems in this session (too much light in the room)
    T 10-11 focus on FV, redness
    viz = active visualization

*EOG biofeedback*
    EOG amplitude (4.5 µV+) mapped to audio volume as feedback
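A rough sketch of the amplitude-to-volume mapping described above. The 4.5 µV threshold is from the note; the full-scale value and the linear mapping are my assumptions.

```python
THRESHOLD_UV = 4.5   # EOG amplitude threshold in microvolts (from the note above)
MAX_UV = 50.0        # assumed full-scale amplitude for a large saccade (my guess)

def feedback_volume(amplitude_uv):
    """Map an EOG amplitude to an audio volume in 0..1; silent below threshold."""
    if amplitude_uv < THRESHOLD_UV:
        return 0.0
    span = MAX_UV - THRESHOLD_UV
    return min((amplitude_uv - THRESHOLD_UV) / span, 1.0)

print(feedback_volume(3.0))   # 0.0 - eyes still, no sound
print(feedback_volume(27.25)) # 0.5 - mid-size eye movement
print(feedback_volume(80.0))  # 1.0 - clipped at full volume
```

Keeping the output silent below the threshold is what makes this usable for minimizing eye movements: any sound at all means the eyes just moved.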



*Relax - guided, 7 min*

    FP physical proprioception
    P-FP allows distinguishing between an eye movement and a focus without movement
    T 10:45-11:30 increase of P-FP: focus on the body? did it cause P-FV to go below 3? → needs more tests
TO DO → increase vertical resolution and add more senses

----------


## ATA

*30% complete*


4 awakenings; I don't consciously remember a single one, so the notes are based purely on signal analysis.

*Trial 1*

(R)
-2.5 activation of FA; increased noise from the surroundings?
5 sudden sound; the activation of T could be attention directed to the ears
15 eye movement / focusing on sight blocks T, P
17.5 activation of body perception, P, T


(B)
-2.5 start of the reactivation of "consciousness"
0 total R activation at max
gradual activation of wakefulness with growing sensory input



A, T, V, P

(P3)
A - 0
T - 6
V - 12
P - 17.5
awakening by sound?
5 s: a sudden sound, or attention on FA
15 s: V-T interaction; possibly looking at a part of the body
60: attention on P, T wins and A declines; a change from listening to perceiving the body

Reconstruction:
-2.5 Some sound from the surroundings triggered a waking reaction; the mind activates more A to find out what is going on
0 Consciousness activated at a full 100R, but mainly the subconscious part; the R available to consciousness is Ra, and at the beginning it is used to process sensory input; wakefulness B activates gradually
5 A sudden sound, most likely a shift of part of the attention to the ears, which then declines
12.5 activation of physical sight
15 peak in T and V; the growing perception of touch most likely triggered an eye movement to look at what is happening (automatic; attention was not on the eyes)
17.5 proprioception activated and wakefulness reached 50%
afterwards attention shifts from hearing to perceiving the body through touch and proprioception.

Body and mind activated at the same time, 17.5.
? - how much B and Ra is needed for me to remember the methods and carry them out
The best time window here is different for each sense = the time before it drops below P3

*Trial 2*

*the bottom trace is not T but A
A, T, V, P

-2.5 awakening thanks to sound (or maybe hearing always activates first?)
attention on FA the whole time
14 activation of physical sight
24 more attention transferred to hearing, or louder noise in the room; the shift is so intense that it deactivated FT and FP
FP stays deactivated and FT comes back
45+ decline of the physical senses; either there is some non-physical activity, or a loss of wakefulness

*Trial 3*


A, P, T, V

Again an activation via sound
2.5 activation of weak NV (hypnopompic images)
4 activation of FT, FP; most likely a muscle twitch; it could also be caused by the hypnagogia, something like a paralysis test
5-15 this could be the activation of paralysis and its verification via micro muscle twitches; attention is on hearing and lets NV grow undisturbed, most likely to the point where attention shifted to NV; by then it was quite stable, switched off the remnants of FV, and a dream began, probably low quality at first, stabilizing further after 20+
the FV peaks are probably some interaction with another sense
Consciousness only properly activated in the dream/hypnopompia; the level of lucidity cannot be determined; it looks more like mere watching of images; NP was not switched on.

Possible practical application: focusing on FA diverts attention and the images can form freely; it also diverts attention from the body, slowing its activation, or may even switch it off.
Switching on paralysis with a gentle muscle movement?


*Trial 4*



Very interesting; the previous ones looked more like abrupt awakenings from NREM, while this looks like a gradual, spontaneous awakening after a REM phase.
There is residual R and even a bit of wakefulness; NV and NP are also more active than in the previous tests.
-7.5 something started activating consciousness; possibly a natural awakening
-2.5 this is strange; something like a micro dream/flash containing proprioception and vision
0 R at 100 and a fairly decent activation of wakefulness, which started climbing sharply
0-9 a dream, but very weak; probably only barely recognizable images and perception of the non-physical body
This is the best chance for AC and LD in the whole day; the success rate here could be around 80% if done correctly; the small catch is that it lasts only 5 s (17.5 if we also count the part with lower wakefulness and less N).
If some technique were used within those 10 s, success would be almost guaranteed.
9 hearing activates and NV switches off
the drop in wakefulness is strange; maybe a lower task priority because the dream was more interesting
12 maybe a moment of attention on the NP body, or its slight movement triggered a peak in FT
14 reactivation of NV, and a drop in FP because of it
17.5 movement; activation of FP and FT, deactivation of NP
27 attention on hearing
32 eye movement; activation of FV
the gradual increase of attention on hearing draws attention away from sight, which slowly switches off; crucially, the eyes did not move, so the priority of NV can grow undisturbed.
56-70 the peaks are most likely due to focusing on the body


17.5 s after awakening, still in the "dream"

-------------


-15 a remnant of NP body perception has a suspiciously high priority
-7 start of the reactivation of consciousness, 10R; probably residual wakefulness from REM
-6 NP and FA activate
-2.5 activation of NT, NV
It cannot go below 3 before t0, because there is no conscious observer there, only the subconscious
0 start of a gradual increase in the priority of the physical senses
0 start of a dream; judging by the priority, it looks like a very high quality image, most likely a dream proper; very strong body perception is involved as well, most likely within the dream, but it is also possible that it is the perception of the N body lying on the bed
   the dream includes touch as well
1 the dream has a high priority and switched off FA
8 several things could have happened here; cause and effect cannot be distinguished; the dream ended or became heavily destabilized, NT switched off and FA switched on; switching FA on temporarily reduced FV
8-22 NV leads; with attention on NV, a decline in NP
15 reactivation of NT
22 attention fixed on the physical body; a peak in FT and an increase in NT caused a drop in NV and the switching off of NP
26 attention on hearing; FA switches off NV
46 eye movement, FT+, FV+; a drop in FP and the switching off of NT
then attention shifts from hearing (FA) to the physical body (FT and FP)
52-65 effect of FA (most likely sounds from the room) on the other senses
65 more attention on sight
wakefulness 0-25: plenty of sensory input, but wakefulness at most enough for observing; thought processes heavily limited; normal thinking and planning impossible, or very strongly limited
*provides no info about lucidity






-15 — NP and NT at 3, a residual dream body; Rc at 60, a highly active subconscious, most likely the situation shortly after a dream ended in REM
-10 — start of reactivation of consciousness (Ra) and hearing
0 — start of conscious perception, start of the dream
*this is roughly what an LD/OOBE after awakening could look like, but lucidity was not measured, so it may have been a dream
*lucidity will probably need to be measured somehow; otherwise there is no way to tell a dream from an LD
5-15 — REMs? This looks like eye movements within the dream
15 — end of the visual part of the dream; it probably continues only as faint images. Perception of the N body stays at the same level
21 — NP and NT switch off
21-25 — another visual dream with eye movement
25 — the dream weakens, and during one of the eye movements attention transfers to it, activating FV, FP and FT (they return above 3)
25-44 — weak NV images
FV and FP peaks = eye movements
attention on FV
57 — sharp activation of hearing, attention fully on vision, possibly even eye opening
74 — movement of the F body
-----------------
Afternoon; awakening caused by pressure soreness of the ear

----------


## ATA

https://brmlab.cz/project/brain_hacking/msi2

----------


## IAmCoder

I was re-reading The Brain, edited by G. Edelman and J.P. Changeux, earlier this year and keep thinking back to this finding:





> One group of cells, discovered by David Hubel and Torsten Wiesel in 1959, will only respond to lines of particular orientation, since the orientational preferences of different cells are different and each responds increasingly more grudgingly as one departs from the preferred orientation until the response disappears at the orthogonal orientation.



What if we built a dreamachine that flickers lines from horizontal to vertical, or perhaps spins them at various frequencies? I believe the inventor of the machine was inspired by hallucinations triggered by the shadows of trees while dozing on a train. Perhaps it should be only vertical lines passing really fast...

----------


## ATA

It depends on what effect you want to achieve, but for LD it is useless. Read the part about geometric imagery at the start of the page.

----------


## kadie

This is very interesting ATA. Im wondering though how to differentiate when you are say meditating or dreaming for example, how do you tell if you are using memory that is stored in the brain, or images that are from the eyes. I'm sure I am not asking this right and I'll have to take some time to get to the crux of my own question...lol..sorry.

( I want to add a note here, maybe you have some insight)
* When I practice, AP/AT as well as when meditating, I tend to get more of a funnel hallucination when in the transition between relaxation and projection. However, when just looking at the black and white images of "Retinal/Cortex" black and whites, I am more attracted to the Retinal images. All of them. Do you have any idea why this is?

----------


## ATA

I have a hypothesis, but it is not proven. I assume different sources of data occupy different phases of the gamma wave. In my automatic drawings I draw something like different positions on the wave's phase for different data sources. The wave also has a peak: the amplitude of the peak is like intensity, and its width is like sharpness of focus. This is only gamma; there is also modulation (cross-frequency modulation) by the cognitive-cycle theta wave, and many other things.
I can write more if you want, but it is very complex.

http://bernardbaars.pbworks.com/f/cr...mem+&+attn.pdf
Frontiers | Divisive Normalization and Neuronal Oscillations in a Single Hierarchical Framework of Selective Visual Attention | Frontiers in Neural Circuits
Global Workspace Dynamics: Cortical

----------


## kadie

It is very complex and I will have to follow slower. 
In post #4, you wrote this...
"Based on subjective measurement and sensory/multisensory integration theory

Signal priority - P
Perception priority of signal 0-5; 3 is the border of conscious perception.
Higher number = lower priority; priority 1 is "focus"

Use of system resources - R
1R = 1/100 of resources available in the normal state of wakefulness

Data streams (vision):
Vision data stream 1 - physical vision (FV)
Vision data stream 2 - from memory
Vision data stream 3 - data
Vision data stream 4 - visualization
Vision data stream 5 - "hypnagogic"
Vision data stream 6 - "dream"

So I think this is the area that I am asking about, is that right?

Part 2 is: when and how do you know which data stream you are sensing a stimulus from, or is that something that is already proven?
I think this is fascinating stuff, and want to learn so much more. Thanks ATA

Also, dont wear yourself out. If you have other stuff to do, I understand completely.

----------


## ATA

It is not proven, only subjective for now, but it is possible according to the scientific knowledge I have.
In my graphs I only use FV as data stream 1, and NV usually as the rest.
A data stream can also have many subtypes; in some cases it is easy to determine the source by its appearance.

For instance, short flashes of images probably have a different source than short dream scenes.
http://fisiologiafmabc.com.br/Diekel...uroscience.pdf

And this is useful for RV:
http://cosy.informatik.uni-bremen.de...is_imagery.pdf

*I will try to write a more user-friendly version of some parts, usable for RV and LD, in the future.
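The P/R scheme from post #4 could be sketched as a small data model. Only the scheme itself comes from the post (P runs 0-5, lower number = higher priority, 1 = "focus", 3 = border of conscious perception; 1R = 1/100 of normal-wakefulness resources); the class, the example streams and the numbers are purely illustrative:

```python
# Illustrative sketch of the P/R notation (not an actual measurement tool).
from dataclasses import dataclass

CONSCIOUS_BORDER = 3  # P values below 3 are consciously perceived

@dataclass
class DataStream:
    name: str        # e.g. "FV" (physical vision), "NV" (non-physical vision)
    priority: float  # P, 0-5; lower number = higher priority, 1 = "focus"
    resources: int   # R, in 1/100 units of normal-wakefulness resources

    def is_conscious(self) -> bool:
        # A stream is consciously perceived when its P crosses below 3
        return self.priority < CONSCIOUS_BORDER

streams = [
    DataStream("FV", priority=1.0, resources=40),  # physical vision in "focus"
    DataStream("NV", priority=4.0, resources=10),  # non-physical vision, subliminal
]

# "Focus" is simply the stream with the lowest P value
focus = min(streams, key=lambda s: s.priority)
print(focus.name, [s.name for s in streams if s.is_conscious()])  # -> FV ['FV']
```

With these example numbers, only FV sits below the conscious border, so it is both the focus and the only consciously perceived stream.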

----------


## ATA

*The intention to see something mentally and its uncertainty* 
In normal perception of physical reality we do not have much chance to realize that vision is a modular process, or what it actually means to see something.
If you imagine something, these modules work very independently.

IST8A F08 Lecture 9
ISTF08 Lecture 10

Processing of visual information in the brain is divided into two major streams: one provides information for identifying objects, the other for how to manipulate them.


*Spoiler* for _Two visual systems_: 




Two visual systems re-viewed
Two visual systems re-viewed. - PubMed - NCBI

To be able to grasp an object successfully, for example, it is essential that the brain compute the actual size of the object, and its orientation and position with respect to the observer (i.e. in *egocentric coordinates*). We also argued that the time at which these computations are performed is equally critical. Observers and goal objects rarely stay in a static relationship with one another and, as a consequence, the egocentric coordinates of a target object can often change radically from moment to moment. For these reasons, it is essential that the required coordinates for action be computed in an egocentric framework at the very moment the movements are to be performed. Perceptual processing needs to proceed in a quite different way. *Vision for perception* does *not require the absolute size of objects or their egocentric locations to be computed.* In fact, such computations would be counter-productive. It would be better to encode the size, orientation, and location of objects relative to the other, preferably larger, objects that are present. Such a *scene-based frame of reference permits a perceptual representation of objects that transcends particular viewpoints, while preserving information about spatial relationships (as well as relative size and orientation) as the observer moves around.*

These considerations led us to predict that normal observers would show, under appropriate conditions, clear differences between perceptual reports and object-directed actions when interacting with pictorial illusions, particularly size-contrast illusions. This counter-intuitive prediction was initially based on the simple assumption that the *perceptual system could not avoid computing the size of a target object in relation to the size of neighbouring objects, whereas visuomotor networks would need to compute the true size of the object.* This prediction was confirmed in a study by Aglioti, Goodale, and DeSouza (1995), which showed that the scaling of grip aperture in-flight was remarkably insensitive to the Ebbinghaus illusion, in which a target disc surrounded by smaller circles appears to be larger than the same disc surrounded by larger circles. In short, maximum grip aperture was scaled to the real, not the apparent, size of the target disc.

According to our two visual systems model, vision for action works only in real time and is not normally engaged unless the target object is visible during the programming phase, that is when bottom-up visual information is being converted into the appropriate motor commands. When there is a delay between stimulus offset and the initiation of the grasping movement, the programming of the grip would be driven by a memory of the target object that was originally derived from a perceptual representation of the scene, created moments earlier by mechanisms in the ventral stream.

Thus, we would predict that memory-guided grasping would be affected by the illusory display, because the stored information about the target's dimensions would reflect the earlier perception of the illusion. In fact, a range of studies has shown that this is exactly the case. In the case of the *dorsal stream this is not so: indeed the coding of the target has to be as far as possible absolute, and needs to be referred to an egocentric rather than a scene-based framework*. Non-target visual information needs to impact dorsal-stream processing dynamically, thereby influencing the moment-to-moment kinematics of the action. It seems likely that this happens without the visual coding of target information being itself modulated: in other words that both target and non-target information each modulate motor control directly and quasi-independently.

Matters are quite different in the dorsal stream, where the peripheral field is relatively well represented. Indeed some dorsal-stream areas, such as the parieto-occipital area (PO), show almost no cortical magnification at all, with a large amount of neural tissue devoted to processing inputs from the peripheral visual fields.








*Stream for action*
- egocentric = 1st person, from the body, in relation to the body
- distance and size are in absolute values
- divided into a stream for the Reach and a stream for the Grasp

Reach and Grasp
The Reach "computes" where the object is in space and its orientation.
The Grasp "computes" the exact shape and size.
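The contrast between the two codings can be illustrated with a toy computation. All numbers here are made up; only the distinction itself (absolute size and egocentric location for action vs. scene-relative size for perception) follows the two-visual-systems model quoted below:

```python
# Toy contrast between vision-for-action and vision-for-perception coding.
# Made-up numbers; only the absolute-vs-relative distinction follows the model.

# A target disc and its neighbours, sizes in cm, position in egocentric cm
target_size = 3.0
neighbour_sizes = [6.0, 6.0, 6.0]     # large surrounding circles (Ebbinghaus-style)
target_position = (40.0, -5.0, 10.0)  # location relative to the observer's body

# Vision-for-action (dorsal stream): absolute size and egocentric location,
# computed at the moment of movement
grip_aperture = target_size      # grip scales to the REAL size
reach_vector = target_position   # reach uses egocentric coordinates

# Vision-for-perception (ventral stream): size coded relative to the scene,
# so large neighbours make the same disc LOOK smaller
perceived_relative_size = target_size / (sum(neighbour_sizes) / len(neighbour_sizes))

print(grip_aperture, perceived_relative_size)  # -> 3.0 0.5
```

This mirrors the Aglioti et al. finding in the quote: the grip stays scaled to the real 3 cm even while the scene-relative code makes the disc appear smaller among larger neighbours.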


*Spoiler* for _Reach and grasp_: 



Different evolutionary origins for the Reach and the Grasp: an explanation for dual visuomotor channels in primate parietofrontal cortex
Frontiers | Different Evolutionary Origins for the Reach and the Grasp: An Explanation for Dual Visuomotor Channels in Primate Parietofrontal Cortex | Movement Disorders

The Reach is mediated by a dorsomedial pathway and transports the hand in relation to the target's extrinsic properties (i.e., location and orientation). The Grasp is mediated by a dorsolateral pathway and opens, preshapes, and closes the hand in relation to the target's intrinsic properties (i.e., size and shape).




A number of patients with damaged visual inputs to the Grasp, but not the Reach, pathway have been described (50, 51). These patients have no problem reaching to the location of a visual target and consistently touch it on the first attempt; however, they use an open hand to do so and only close their digits to grasp the target after touching it. Thus, these patients seemingly adopt a modified Touch-then-Grasp strategy. They use vision to determine the target's extrinsic properties (location) but are unable to use vision to determine the target's intrinsic properties (size and shape) and thus cannot preshape the hand to Grasp prior to target contact. Instead they rely on haptic cues after target contact to shape their digits to the contours of the target in order to Grasp it.

---
Cavina-Pratesi and colleagues (52) describe the reverse condition, in which a patient cannot perform a visually guided Reach but can perform a visually guided Grasp. The patient, M.H., suffered an anoxic episode, disrupting visual inputs to the Reach but not the Grasp pathway. M.H. accurately opens, preshapes, and closes his hand to Grasp a visual target, but only if the target is located adjacent to his hand; i.e., if he doesn't have to Reach for it. If he does have to Reach for it, he must first locate it by touch before shaping his hand to Grasp it: Presumably M.H., wittingly or unwittingly, compensates for the direction and distance errors resulting from his damaged visual reaching network, by habitually opening his hand widely: the wider the hand aperture, the higher the probability of successfully acquiring the object. M.H.'s visually guided Reach movements are inaccurate regardless of whether the movement is directed inward (toward his body) or outward (away from his body), indicating that his deficit is related to visual guidance of the Reach and not the location of the target within egocentric space. Thus, M.H. can use vision to guide his hand in relation to the intrinsic (size and shape) but not extrinsic (location) properties of a target.

----------


## ATA

*Image Quality vs. Information Quality*
Before attempting to get somewhere or see something, we should decide whether we want the most accurate information with minimum distortion, or a good-quality sensory "image". It is possible to have both undistorted information and image quality, but achieving this is difficult; in most cases a compromise between the two will be needed.

Creating mental images, dreams, LDs... works backwards compared with our normal physical perception (image→concept vs. concept→image).

For example, if you look at this picture:
46198-smrk.jpg
Your brain starts by identifying contrast, shape and color, and gradually gets to the fact that it is a tree, a conifer, a spruce; the results are concepts at different levels of abstraction.

Now try to imagine the opposite situation: try to visualize TREE. What do you see? One of the factors affecting the outcome is how much you care about accuracy.

The result can be just the abstract concept of a tree, which has no particular form; it is rather a description of what an object must satisfy to be considered a tree, though that description is impossible to put into words. The next stage is putting it into words, and that carries the limitations of language. Most people will go on to the next stage and have an idea of a real tree, not the tree in the picture. Gradually, without knowing it, they add more restrictions, for instance the type of tree: at Christmas time almost no one picks a palm tree. In the end you get to one specific tree, with a specific position, size, color, texture...
Google_lucky.jpg

For access to information it is best to leave it in conceptual form, where the information is more precise.
For LD a good-quality image is best.

It's like the difference between typing TREE into the Google Images search field, where the result is all possible tree representations, and pressing the "I'm feeling lucky" button, which selects the most likely option.

----------


## ATA

*Assumptions in Visualization*
It turned out that I have about 68 assumptions that negatively affect (visual) visualization.

I am trying to add to my system the possibility of changing their setup, or turning them OFF, according to the task.

They seem to have different origins: evolutionary, physiological, beliefs and suggestions.

One of the assumptions I managed to identify is gravity. It is easier to visualize an object on the ground than in space; in space it has a tendency to fall down.

Everyone has a slightly different setup of these "assumptions".

----

*Deconceptualization*

Try to reach out and grab the object by hand. You can divide this in two: the reach determines the position, and the attempt to grasp accurately determines the shape and size. This method forces the concept to take a concrete form (how and why is in the theory). To start, it is best to imagine a hand connected to your actually perceived body and stretch it out with the intent to catch the object. The object should take on a precise shape and size at the distance of the stretched hand. If the object is big, an elephant for instance, after touching it we can only see a small part; so far the best option is to try walking backward. You can also try zooming out.

It is also possible to use a method where the hand is not connected to the body. After some practice you no longer need to imagine the hand; the intention to reach out and grab the object suffices.

It is possible to separate the individual elements and change the position of the concept in space without giving it other attributes such as exact size, shape, color, visual form... Selection of the object can also be made as if you were using a PC mouse, selecting an area/window with an added intention of manipulation (3D volumetric selection also works).

The method works both at the level where we have a concept without an image, and with a blurry, very poor image whose attributes are not accurately determined.

---
If you are not able to turn off the assumptions influencing visualization by intention, use this trick. Imagine a 2D window with clear borders; it can be a physical object or an imagined PC window, and it can be mounted on different frames of reference, such as an exact location in space relative to the body, eyes, head, a concrete object...

Using windows has a huge advantage in that it bypasses most of the assumptions and possible collisions. The image in the window has far fewer restrictions, because physical laws do not necessarily apply there; it rather resembles a PC screen or a simulator interface.



*Simulator interface* 
You can control it directly by intention, or you can visualize controls. A handy inspiration is a 3D modeling program; look at a similar PC program, what its functions are and what you can do with it.

Try it: imagine a window with an apple in it, then try changing the intensity and vector of gravity. If it works right, the apple starts to move in the window according to your changes. I destroyed mine by adding too much gravity :smiley: 

Using windows also reduces the fear of mingling physical and non-physical reality, because the space where they can project is clearly marked.


*Gestures for image control*

Spinning the image by "waving" your fingers on its sides is a good way to inspect an object from all sides; light rotation also stabilizes the quality of the object by forcing the brain to recompute its values.

Changing the image (selecting another representation of the concept) uses a wrist movement up or down and a click to select. The speed of the gesture affects the rate of change (due to many restrictions it is not easy to show the options side by side).

Reaching inside, grasping the object slightly from above and pulling it out of the window is a good way to make it 3D.

---
You can test how long a window lasts: select its place and frame of reference. An image in a window is more stable than normal visualization, and if your strength of intent is good, when you look at the window after a few minutes it will still be there with the image. It works with eyes open. After some practice, try the same with a 3D box instead of a window. It is harder, because there are more assumptions for a 3D object, but practice with 2D and the intention to use it as a simulation environment may help. Create a box around yourself and change its parameters.

----------


## ATA

*Interaction between proprioception and vision*

If you close your eyes and move a hand in a dark room, you still see the movement of the hand. At first I assumed it was a separate sense, but after some experiments I found that it is an influence of proprioception on vision.

The hand appears darker than the surroundings. Darker than black :smiley: 

The brain uses assumptions: it is impossible to see inside a solid object; two physical objects cannot occupy the same space.

I first noticed it in open-eye visualization. I tried to visualize an apple inside a wall; more precisely, in the place where the wall is, but as if in a different dimension/realm, to reduce interference. In this realm the wall does not exist, but the visualized apple has the same spatial coordinates as the real wall. Non-physical vision is not restricted: no wall. But physical vision assumes it is impossible to see inside the wall, and this assumption influences the priority of vision.

For this it is useful to imagine the priority of vision as a heat map: important objects have high priority and attract attention. Inside the wall, physical vision has no priority; it is turned off because no visual data is expected there. Interestingly, on the other side of the wall vision has some priority, similar to closed-eye conditions.

Unfortunately the trick does not work for me with closed eyes, because I am not sure where the wall is, so it cannot influence vision; but I do know by proprioception where my body is. So if the apple is visualized at coordinates inside the body, the image is much better.
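The heat-map idea could be sketched as a toy grid; the grid size, all the numbers and the "wall" columns are purely illustrative, only the mechanism (high priority attracts attention, the wall region is switched off) comes from the description above:

```python
# Toy model of vision priority as a heat map (illustrative numbers only).
# High values attract attention; the region "inside the wall" is forced
# to zero because physical vision expects no visual data there.

ROWS, COLS = 5, 5
priority = [[1.0] * COLS for _ in range(ROWS)]  # baseline visual field

priority[2][4] = 5.0               # an important object: high priority

for r in range(ROWS):              # columns 1-2 are occupied by the wall:
    priority[r][1] = 0.0           # physical vision is switched off there,
    priority[r][2] = 0.0           # since no data is expected inside a solid

# Attention is attracted to the most salient location on the map
attended = max(
    ((r, c) for r in range(ROWS) for c in range(COLS)),
    key=lambda rc: priority[rc[0]][rc[1]],
)
print(attended)  # -> (2, 4)
```

Attention lands on the important object, and the wall columns can never win it while their priority is held at zero.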

----------


## snoop

> This is very interesting ATA. Im wondering though how to differentiate when you are say meditating or dreaming for example, how do you tell if you are using memory that is stored in the brain, or images that are from the eyes. I'm sure I am not asking this right and I'll have to take some time to get to the crux of my own question...lol..sorry.
> 
> ( I want to add a note here, maybe you have some insight)
> * When I practice, AP/AT as well as when meditating, I tend to get more of a funnel hallucination when in the transition between relaxation and projection. However, when just looking at the black and white images of "Retinal/Cortex" black and whites, I am more attracted to the Retinal images. All of them. Do you have any idea why this is?



I think while meditating and dreaming it is safe to say much of the visual information comes from memories, in fact for you to associate meaning with the images at all you would by nature have to remember stored profiles of past sensory stimulation.  The geometric shapes and things kind of outlined in ATA's other posts are more or less what you can nail to visual phenomena being influenced by neurotransmission and the way in which the brain functions/its structure. Things beyond this that involve concepts, such as people, places, things, etc. would obviously have to draw from memory at least in some way.

----------


## ATA

I hope I understand your question right. Funnel hallucination: did you mean images that appear only in a part of the visual field, as if looked at through a tube? If yes, this is caused by limits on the visual angle within which you see sharp images when the eyes do not move (more details in the post on visual field limits). Images in this stage are usually, at the start, caused by consolidation of memory from the hippocampus to the cortex; they are like flashes of images, very short, around 0.3 s at first. The hippocampus acts as a short-term memory buffer, and it is good to empty it before an RV attempt to avoid unwanted interaction with the day's memory. Longer-lasting images and scenes probably originate in the cortex and also have more sources, like visualization, memory, memory consolidation...

It is not good to focus on the blackness behind the eyelids; that usually shifts focus to the physical vision stream and to things originating in the retina and early visual cortex. It is best to wait for the moment when physical vision turns off.

Also, do not focus on geometric shapes, blobs and other such things; they are usually in the physical vision stream, though they can also be in other ones. Focusing on them usually gets you nowhere; in the best case you can achieve stable geometric images.

More info can be found in the part: b) Eye movements can disrupt imagery by causing a shift of attention to physical sight. (still not complete)
------------------------
Visual information and memories are not that simple, because there are many types of memory systems. Depending on which system is used, visualization, images from memory and dreams work in slightly different ways.
For instance, there is a big difference between using semantic memory of some object (= a concept) and using episodic memory, ideally still in the hippocampus buffer (= closer to a percept, with information about exact shape, color...).



When images of familiar concepts are present on the retina, neurons in the human MTL encode these in an abstract, modality-independent [5] and invariant manner [6,7]. These neurons are activated when subjects view [6], imagine [8] or recall these concepts or episodes [9].

5.
Explicit encoding of multimodal percepts by single neurons in the human brain.

Different pictures of Marilyn Monroe can evoke the same percept, even if greatly modified as in Andy Warhol's famous portraits. But how does the brain recognize highly variable pictures as the same percept? Various studies have provided insights into how visual information is processed along the "ventral pathway," via both single-cell recordings in monkeys and functional imaging in humans. Interestingly, in humans, the same "concept" of Marilyn Monroe can be evoked with other stimulus modalities, for instance by hearing or reading her name. Brain imaging studies have identified cortical areas selective to voices and visual word forms. However, how visual, text, and sound information can elicit a unique percept is still largely unknown. By using presentations of pictures and of spoken and written names, we show that (1) single neurons in the human medial temporal lobe (MTL) respond selectively to representations of the same individual across different sensory modalities; (2) the degree of multimodal invariance increases along the hierarchical structure within the MTL; and (3) such neuronal representations can be generated within less than a day or two. These results demonstrate that single neurons can encode percepts in an explicit, selective, and invariant manner, even if evoked by different sensory modalities.

6.
Invariant visual representation by single neurons in the human brain.
It takes a fraction of a second to recognize a person or an object even when seen under strikingly different conditions. How such a robust, high-level representation is achieved by neurons in the human brain is still unclear. In monkeys, neurons in the upper stages of the ventral visual pathway respond to complex images such as faces and objects and show some degree of invariance to metric properties such as the stimulus size, position and viewing angle. We have previously shown that neurons in the human medial temporal lobe (MTL) fire selectively to images of faces, animals, objects or scenes. Here we report on a remarkable subset of MTL neurons that are selectively activated by strikingly different pictures of given individuals, landmarks or objects and in some cases even by letter strings with their names. These results suggest an invariant, sparse and explicit code, which might be important in the transformation of complex visual percepts into long-term and more abstract memories.

Imagery neurons in the human brain.
Vivid visual images can be voluntarily generated in our minds in the absence of simultaneous visual input. While trying to count the number of flowers in Van Gogh's Sunflowers, understanding a description or recalling a path, subjects report forming an image in their "mind's eye". Whether this process is accomplished by the same neuronal mechanisms as visual perception has long been a matter of debate. Evidence from functional imaging, psychophysics, neurological studies and monkey electrophysiology suggests a common process, yet there are patients with deficits in one but not the other. Here we directly investigated the neuronal substrates of visual recall by recording from single neurons in the human medial temporal lobe while the subjects were asked to imagine previously viewed images. We found single neurons in the hippocampus, amygdala, entorhinal cortex and parahippocampal gyrus that selectively altered their firing rates depending on the stimulus the subjects were imagining. Of the neurons that fired selectively during both vision and imagery, the majority (88%) had identical selectivity. Our study reveals single neuron correlates of volitional visual imagery in humans and suggests a common substrate for the processing of incoming visual information and visual recall.

------------------------------

Figure 4.2: The formal framework of PIT. The mental imagination of a scene starts with 1) the retrieval of a set of mental concepts from C-LTM which conceptually describe the scene; 2) these mental concepts are successively instantiated with perceptual information by the cyclic process of select-execute-identify; 3) an interpretation is drawn from all identified mental concepts with their instances of perceptual information; 4) this interpretation constitutes the mental image of the scene.
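The four steps in that caption can be sketched as a minimal loop. Every name here (the toy C-LTM dictionary, the stub select-execute-identify cycle) is a placeholder for illustration, not PIT's actual machinery:

```python
# Illustrative skeleton of the PIT cycle from Figure 4.2 (all structures
# are placeholder stubs, not the theory's real formalism).

# 1) a toy C-LTM: mental concept -> stored perceptual description
C_LTM = {"tree": "trunk+branches", "apple": "round+red"}

def imagine_scene(scene_concepts):
    instantiated = {}
    for concept in scene_concepts:        # 2) cyclic select-execute-identify:
        selected = concept                #    select a concept to work on
        percept = C_LTM[selected]         #    execute: retrieve perceptual info
        instantiated[selected] = percept  #    identify: bind percept to concept
    # 3)+4) the interpretation of all identified concepts with their
    # perceptual instances constitutes the mental image of the scene
    return instantiated

print(imagine_scene(["tree", "apple"]))
# -> {'tree': 'trunk+branches', 'apple': 'round+red'}
```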

----------


## ATA

One hypothesis about RV and telepathy

I assume that receiving information is caused by some quantum effect that affects neurons in the cortex.
As I wrote in an earlier post, single neurons can represent very complex concepts, and you can activate such a concept by activating the neuron.
RV works at the concept level: it is easier to modulate the activity of one neuron and get a concept than to try to reconstruct an image from pixels.
In the "pixels" case, very precise activation of a large portion of neurons in the early visual cortex or retina would be needed.
Modulating a conceptual neuron seems like a much more efficient way to communicate/receive information.

----------


## snoop

By the way, thanks for posting all of this ATA, you've made my job at researching all this stuff myself that much easier, I appreciate it.

----------


## ATA

This is only a very small part of my notes. If you want info about a specific area, I will try to find the related notes.

----------


## kadie

> *Image Quality vs. Information Quality*
> Before attempting to get somewhere or see something, we should decide whether we want the most accurate information with minimum distortion, or a good-quality sensory "image". It is possible to have both undistorted information and image quality, but achieving this is difficult; in most cases a compromise between the two will be needed.
> 
> Creating mental images, dreams, LDs... works backwards compared with our normal physical perception (image→concept vs. concept→image).
> 
> For example, if you look at this picture:
> 46198-smrk.jpg
> Your brain starts by identifying contrast, shape and color, and gradually gets to the fact that it is a tree, a conifer, a spruce; the results are concepts at different levels of abstraction.
> 
> ...



Ok, for this part, would you say that people with very practiced visualization skills could visualize the tree nearly the same as looking with open eyes at a live tree? If so, how does one cross over from that acute vision to acute visualization? As in the studies of the blind lady or studies with paraplegia and phantom limb, what do you think is the process or mechanism that increases ones imagined vision to that of actual vision. Besides memory of course. I know that as we practice visualization in meditation or movement in LD, we are increasing the skills in that area, but is there confirmation by any brain studies that a certain area of the brain is affected?

Would that be in the upper ventral pathway?

Is proprioception another sense? Like a sixth sense in addition to the basic 5.

Holy carp! This is a lot of info and a lot of questions.

----------


## kadie

> I think while meditating and dreaming it is safe to say much of the visual information comes from memories, in fact for you to associate meaning with the images at all you would by nature have to remember stored profiles of past sensory stimulation.  The geometric shapes and things kind of outlined in ATA's other posts are more or less what you can nail to visual phenomena being influenced by neurotransmission and the way in which the brain functions/its structure. Things beyond this that involve concepts, such as people, places, things, etc. would obviously have to draw from memory at least in some way.



Thank you snoop for this. It helps me get more to the point. A little background might help....
When I was younger, I had a teacher that taught a class about being assertive. It was to help kids that were either passive or aggressive come to a middle ground and become assertive. During one of the weeks the class centered around relaxation and meditation. I took to it very well and have used the same method for over 30 years. As a matter of fact, I brought in a meditation tape for the class to use.(my step father at the time was deeply into mind over matter stuff and helped form a base for one of the top motivation speakers that is still giving seminars. You know the whole "Life Coach movement? That guy.) So I had a very early start on meditation and visualization and was a natural lucid dreamer since about 12 years old.

Now, I came to this site for remote viewing because a Google search brought up WakingNomads RV links, etc. I already had a lot of LD and AP practice. When I see people post that AP is simply a form of LD, I have to disagree.
At this point, I realize that for some people it is, but I feel that what I experience as AP, and the visuals I get, are more than hypnagogic hallucination, because I AP from a wake state and NEVER from a dream state. What I see is a lot like traveling through a vortex of color and being propelled through space and nebulae. When Gab asked me last year to describe what I experienced, I had to look up images that most closely resembled what I saw while projecting. Last month, Sageous described what I experience as more of a transcendence, which is what I had been saying all along. So the reason ATA's thread here interests me so much is my own understanding of what my AP experience is and how it works, and how to use some of his knowledge and notes to improve my RV drawings by turning off the visual part of my mind's eye and turning on the other part.
The strange part is that during AP, I see (with my eyes closed) more of the retinal type of images rather than the cortex images. That is what is kind of bugging me.

----------


## ATA

About the quality of open-eyed visualization: I think it is possible, but very problematic. I have tried to do it and encountered many problems. At my best I get a good image, but it is still very far from lifelike, still transparent.

There are many ways you can do it, but most of them end very badly for you.
The main problem is having visualization and physical sight at the same time; so far it is impossible to use one without influencing the other.
It is impossible to answer your question now because it is too complex. I will try to put more notes here in the future, and some of them will help you understand.

*Several areas of the brain show differential activity in an fMRI study measuring how humans manipulate mental imagery (Credit: Alex Schlegel)*
----------------
There are far more senses than the five everyone knows; knowing them and how they work helps with methods like WILD.

*Proprioception*
Proprioception (/ˌproʊpri.ɵˈsɛpʃən/ PRO-pree-o-SEP-shən), from Latin proprius, meaning "one's own", "individual" and perception, is the sense of the relative position of neighbouring parts of the body and strength of effort being employed in movement.[1] It is provided by proprioceptors in skeletal striated muscles and in joints. It is distinguished from exteroception, by which one perceives the outside world, and interoception, by which one perceives pain, hunger, etc., and the movement of internal organs. The brain integrates information from proprioception and from the vestibular system into its overall sense of body position, movement, and acceleration. The word kinesthesia or kinæsthesia (kinesthetic sense) has been used inconsistently to refer either to proprioception alone or to the brain's integration of proprioceptive and vestibular inputs.

--
*Ruffini ending*
A slowly adapting, low-threshold receptor which is constantly active during joint motion. Additionally, these endings have been found to react to axial loading and tensile strain in the ligament, but not to perpendicular compressive joint forces, revealing their importance in signaling joint position and rotation rather than direct pressure. These characteristics are believed to be of importance in the regulation of stiffness and preparatory control of the muscles around the joint.

*Pacini corpuscle*
The Pacini corpuscle differs from the Ruffini ending in that it is a rapidly adapting, high-threshold receptor, sensitive to joint acceleration/deceleration, that is able to sense mechanical disturbances occurring even at a distance.

*Golgi-like receptor*
The Golgi-like ending is, therefore, silent in the immobile joint and only active at the extremes of joint motion.


'Intrafusal' muscle fibres contain 'muscle spindles', which signal passive, static, and dynamic muscle stretch.

Golgi tendon organs signal tendon tension.

Joint receptors are fast-adapting and signal joint angle.

*The consensus is that muscle receptors (stretch receptors wrapped around the muscle fibres) play the major role in kinaesthesis.
http://neurobiography.info/teaching....9&mode=handout
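The receptor properties in the notes above can be collected into a small lookup table. A minimal sketch in Python (the class name and field layout are mine; fields the notes do not state are left as `None` rather than guessed):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Proprioceptor:
    """One class of proprioceptive receptor, fields taken from the notes above."""
    name: str
    adaptation: Optional[str]  # "slow" or "rapid"; None where the notes don't say
    threshold: Optional[str]   # "low" or "high"; None where the notes don't say
    signals: str               # what the receptor encodes

RECEPTORS = [
    Proprioceptor("Ruffini ending", "slow", "low",
                  "joint position and rotation (axial load, tensile strain)"),
    Proprioceptor("Pacini corpuscle", "rapid", "high",
                  "joint acceleration/deceleration, even distant disturbances"),
    Proprioceptor("Golgi-like ending", None, None,
                  "extremes of joint motion only"),
    Proprioceptor("Muscle spindle", None, None,
                  "passive, static, and dynamic muscle stretch"),
    Proprioceptor("Golgi tendon organ", None, None,
                  "tendon tension"),
]

# e.g. pick out the receptors the notes call rapidly adapting
rapid = [r.name for r in RECEPTORS if r.adaptation == "rapid"]
print(rapid)  # ['Pacini corpuscle']
```

This is just a compact restatement of the notes, not an addition to them; it makes it easy to query which receptors share a property.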

----------


## ATA

*Touch*

Nick's Teaching Website

----------


## ATA

Experiment:

P(FV) = priority of physical vision

Open-eyes condition:
normal 2.3
focused 1.11
unfocused / focused on another sense 2.65

*50cm from the wall*
P(FV) by distance/area:
10cm   2.4
25cm   2.45
50cm   1.62 (1.03-2.7 depending on focus)
inside wall 3.07 (FV off)
100cm   2.82
200cm   2.82

Eyes closed, 50cm from the wall
P(FV) by distance/area:
10cm   2.32
25cm   2.47
50cm   2.49
inside wall 2.5
100cm   2.5
200cm   2.5
*does not work; I don't know where the wall is, or whether there is a wall

Eyes closed, 50cm from the wall + hand on the wall
P(FV) by distance/area:
10cm   2.47
25cm   2.5
50cm   2.5
inside wall 2.8 when looking 10cm to the left of the hand, 3.07 directly below the hand
100cm   2.5
200cm   2.5
* some expectation of the wall

To do: hand on eyes, hand 25cm from eyes, head against a wall, some objects on the head / in front of the eyes

The hand influences P(FV) only in the area where it is.

An object does not work.

Hands on the other side of an object do not work.
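To make the pattern in these readings easier to see, here is a small sketch that tabulates the three conditions side by side (the dict layout and the printing are mine, not part of the experiment; values are the ones above, with decimal commas written as points, and the ambiguous hand-condition "inside wall" reading taken as 2.8):

```python
# ATA's P(FV) readings by focus distance, transcribed from the tables above
DISTANCES = ["10cm", "25cm", "50cm", "inside wall", "100cm", "200cm"]

open_eyes        = {"10cm": 2.4,  "25cm": 2.45, "50cm": 1.62,
                    "inside wall": 3.07, "100cm": 2.82, "200cm": 2.82}
closed_eyes      = {"10cm": 2.32, "25cm": 2.47, "50cm": 2.49,
                    "inside wall": 2.5,  "100cm": 2.5,  "200cm": 2.5}
closed_plus_hand = {"10cm": 2.47, "25cm": 2.5,  "50cm": 2.5,
                    "inside wall": 2.8,  "100cm": 2.5,  "200cm": 2.5}
# (3.07 was recorded when looking directly below the hand)

# With eyes closed the values flatten out near 2.5 -- consistent with the note
# that without sight there is no information about where the wall is.
# Touching the wall raises the "inside wall" reading again.
for d in DISTANCES:
    print(f"{d:>12}  open={open_eyes[d]:.2f}  "
          f"closed={closed_eyes[d]:.2f}  closed+hand={closed_plus_hand[d]:.2f}")
```

The printout shows the spread in the closed-eyes column is much smaller than with open eyes, which is the point of the "expectation of the wall" note: P(FV) only varies with distance when something (sight or touch) tells you where the wall is.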

----------

